Test Report: QEMU_macOS 17877

313e97f706b26b221c5e58ce6be0ee030a1cb1f4:2024-03-28:33789

Tests failed (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.37
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.59
36 TestAddons/Setup 11.06
37 TestCertOptions 12.37
38 TestCertExpiration 197.67
39 TestDockerFlags 12.58
40 TestForceSystemdFlag 12.01
41 TestForceSystemdEnv 10.1
47 TestErrorSpam/setup 9.83
56 TestFunctional/serial/StartWithProxy 10.15
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
72 TestFunctional/serial/ExtraConfig 5.25
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 104.2
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.4
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.51
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39
150 TestMultiControlPlane/serial/StartCluster 10.13
151 TestMultiControlPlane/serial/DeployApp 114.21
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 48.72
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.97
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 4.15
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.98
174 TestJSONOutput/start/Command 9.85
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.24
206 TestMountStart/serial/StartWithMountFirst 10.67
209 TestMultiNode/serial/FreshStart2Nodes 10.03
210 TestMultiNode/serial/DeployApp2Nodes 79.08
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 56.31
218 TestMultiNode/serial/RestartKeepsNodes 7.32
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 2.28
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 21.7
226 TestPreload 9.95
228 TestScheduledStopUnix 10.18
229 TestSkaffold 16.66
232 TestRunningBinaryUpgrade 626.12
234 TestKubernetesUpgrade 18.77
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.45
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.71
250 TestStoppedBinaryUpgrade/Upgrade 582.59
252 TestPause/serial/Start 10.04
262 TestNoKubernetes/serial/StartWithK8s 9.93
263 TestNoKubernetes/serial/StartWithStopK8s 5.89
264 TestNoKubernetes/serial/Start 5.87
268 TestNoKubernetes/serial/StartNoArgs 5.9
270 TestNetworkPlugins/group/auto/Start 9.96
271 TestNetworkPlugins/group/kindnet/Start 9.9
272 TestNetworkPlugins/group/calico/Start 9.87
273 TestNetworkPlugins/group/custom-flannel/Start 9.98
274 TestNetworkPlugins/group/false/Start 10.25
275 TestNetworkPlugins/group/enable-default-cni/Start 9.96
276 TestNetworkPlugins/group/flannel/Start 10
277 TestNetworkPlugins/group/bridge/Start 9.78
279 TestNetworkPlugins/group/kubenet/Start 9.87
281 TestStartStop/group/old-k8s-version/serial/FirstStart 10.08
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
288 TestStartStop/group/no-preload/serial/FirstStart 10.71
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/old-k8s-version/serial/Pause 0.12
294 TestStartStop/group/embed-certs/serial/FirstStart 10.02
295 TestStartStop/group/no-preload/serial/DeployApp 0.09
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
299 TestStartStop/group/no-preload/serial/SecondStart 6.61
300 TestStartStop/group/embed-certs/serial/DeployApp 0.09
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/embed-certs/serial/SecondStart 6.29
305 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/no-preload/serial/Pause 0.1
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.98
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.11
316 TestStartStop/group/newest-cni/serial/FirstStart 9.88
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
326 TestStartStop/group/newest-cni/serial/SecondStart 5.25
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (39.37s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-603000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-603000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.371118083s)

-- stdout --
	{"specversion":"1.0","id":"2e6a890e-6b8f-4628-9bd8-ca7157830d53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-603000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6b1b1d6-f5af-4b5f-9766-84f82c6585d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17877"}}
	{"specversion":"1.0","id":"3f4b0679-c1bd-4a03-836c-da4335edf9bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig"}}
	{"specversion":"1.0","id":"10b70b21-c556-4fbf-a1ee-de6f016dd1a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"18ea28ee-8159-4102-be5a-461bfd744e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d610626-b8ae-46d9-8e7b-32c93b9967bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube"}}
	{"specversion":"1.0","id":"aaaa3e09-ccfd-4f7e-a166-27f4103e24a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"bb64cbbe-aac7-42ae-8434-cb8833957857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd79adc5-132a-421e-bdfc-62da5b0fb087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3b66ea09-cf8c-4786-9ee1-45deba7c1376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcaacdb4-bdf1-45e6-819f-ca7349f2dc8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-603000\" primary control-plane node in \"download-only-603000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad07c6af-0540-4d0b-8054-419023f15256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cea2c0c9-5f99-49f4-8292-71940b75449c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220] Decompressors:map[bz2:0x140006a6a70 gz:0x140006a6a78 tar:0x140006a69a0 tar.bz2:0x140006a69e0 tar.gz:0x140006a69f0 tar.xz:0x140006a6a40 tar.zst:0x140006a6a50 tbz2:0x140006a69e0 tgz:0x140006a69f0 txz:0x140006a6a40 tzst:0x140006a6a50 xz:0x140006a6a80 zip:0x140006a6a90 zst:0x140006a6a88] Getters:map[file:0x1400079c8c0 http:0x140008141e0 https:0x14000814230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"681bbf8b-8420-4531-9202-4509d5cceaee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0328 11:47:55.263329   15786 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:47:55.263551   15786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:47:55.263554   15786 out.go:304] Setting ErrFile to fd 2...
	I0328 11:47:55.263556   15786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:47:55.263674   15786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	W0328 11:47:55.263762   15786 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17877-15366/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17877-15366/.minikube/config/config.json: no such file or directory
	I0328 11:47:55.265009   15786 out.go:298] Setting JSON to true
	I0328 11:47:55.282555   15786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10047,"bootTime":1711641628,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:47:55.282630   15786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:47:55.288027   15786 out.go:97] [download-only-603000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:47:55.291933   15786 out.go:169] MINIKUBE_LOCATION=17877
	I0328 11:47:55.288120   15786 notify.go:220] Checking for updates...
	W0328 11:47:55.288184   15786 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball: no such file or directory
	I0328 11:47:55.299779   15786 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:47:55.302900   15786 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:47:55.305940   15786 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:47:55.312910   15786 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	W0328 11:47:55.320969   15786 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 11:47:55.321228   15786 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:47:55.325924   15786 out.go:97] Using the qemu2 driver based on user configuration
	I0328 11:47:55.325944   15786 start.go:297] selected driver: qemu2
	I0328 11:47:55.325960   15786 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:47:55.326046   15786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:47:55.328900   15786 out.go:169] Automatically selected the socket_vmnet network
	I0328 11:47:55.335290   15786 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0328 11:47:55.335437   15786 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 11:47:55.335510   15786 cni.go:84] Creating CNI manager for ""
	I0328 11:47:55.335529   15786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0328 11:47:55.335580   15786 start.go:340] cluster config:
	{Name:download-only-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:47:55.340378   15786 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:47:55.344973   15786 out.go:97] Downloading VM boot image ...
	I0328 11:47:55.344991   15786 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso
	I0328 11:48:13.097110   15786 out.go:97] Starting "download-only-603000" primary control-plane node in "download-only-603000" cluster
	I0328 11:48:13.097136   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:13.381865   15786 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 11:48:13.381982   15786 cache.go:56] Caching tarball of preloaded images
	I0328 11:48:13.382786   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:13.388743   15786 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0328 11:48:13.388775   15786 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:13.993666   15786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 11:48:33.441236   15786 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:33.441424   15786 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:34.139761   15786 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0328 11:48:34.139982   15786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-603000/config.json ...
	I0328 11:48:34.139998   15786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-603000/config.json: {Name:mk0d42e3126b55e5ccf673930b82c29c9b85121c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:48:34.141116   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:34.141308   15786 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0328 11:48:34.556207   15786 out.go:169] 
	W0328 11:48:34.560334   15786 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220] Decompressors:map[bz2:0x140006a6a70 gz:0x140006a6a78 tar:0x140006a69a0 tar.bz2:0x140006a69e0 tar.gz:0x140006a69f0 tar.xz:0x140006a6a40 tar.zst:0x140006a6a50 tbz2:0x140006a69e0 tgz:0x140006a69f0 txz:0x140006a6a40 tzst:0x140006a6a50 xz:0x140006a6a80 zip:0x140006a6a90 zst:0x140006a6a88] Getters:map[file:0x1400079c8c0 http:0x140008141e0 https:0x14000814230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0328 11:48:34.560361   15786 out_reason.go:110] 
	W0328 11:48:34.568218   15786 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:48:34.572264   15786 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-603000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.37s)
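
The root cause is the 404 on the darwin/arm64 kubectl checksum URL: v1.20.0 appears to predate published darwin/arm64 kubectl builds, so the checksum file (and, in all likelihood, the binary itself) does not exist at dl.k8s.io. A minimal check with plain curl, no minikube involved, prints the final HTTP status after following dl.k8s.io's redirect:

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl

The first command should print 404, matching the log above; swapping arm64 for amd64 in the same URLs should print 200, which would localize the failure to a missing arm64 artifact rather than a network problem. It also explains the TestDownloadOnly/v1.20.0/kubectl failure below: the download never happened, so the cached binary cannot exist.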

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestOffline (10.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.44723225s)

-- stdout --
	* [offline-docker-984000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-984000" primary control-plane node in "offline-docker-984000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-984000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:00:48.500452   17571 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:00:48.500603   17571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:48.500606   17571 out.go:304] Setting ErrFile to fd 2...
	I0328 12:00:48.500608   17571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:48.500753   17571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:00:48.501950   17571 out.go:298] Setting JSON to false
	I0328 12:00:48.519531   17571 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10820,"bootTime":1711641628,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:00:48.519640   17571 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:00:48.525742   17571 out.go:177] * [offline-docker-984000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:00:48.529589   17571 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:00:48.533748   17571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:00:48.529624   17571 notify.go:220] Checking for updates...
	I0328 12:00:48.537804   17571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:00:48.540698   17571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:00:48.543662   17571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:00:48.546746   17571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:00:48.550053   17571 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:00:48.550121   17571 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:00:48.553657   17571 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:00:48.559596   17571 start.go:297] selected driver: qemu2
	I0328 12:00:48.559605   17571 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:00:48.559613   17571 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:00:48.561581   17571 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:00:48.564680   17571 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:00:48.567813   17571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:00:48.567849   17571 cni.go:84] Creating CNI manager for ""
	I0328 12:00:48.567855   17571 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:00:48.567860   17571 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:00:48.567892   17571 start.go:340] cluster config:
	{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:00:48.572315   17571 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:48.579709   17571 out.go:177] * Starting "offline-docker-984000" primary control-plane node in "offline-docker-984000" cluster
	I0328 12:00:48.583669   17571 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:00:48.583701   17571 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:00:48.583711   17571 cache.go:56] Caching tarball of preloaded images
	I0328 12:00:48.583783   17571 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:00:48.583788   17571 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:00:48.583870   17571 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/offline-docker-984000/config.json ...
	I0328 12:00:48.583880   17571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/offline-docker-984000/config.json: {Name:mkd3ebab7a47144d48f59de9748e2172ab9eead7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:00:48.584178   17571 start.go:360] acquireMachinesLock for offline-docker-984000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:00:48.584210   17571 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "offline-docker-984000"
	I0328 12:00:48.584225   17571 start.go:93] Provisioning new machine with config: &{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:00:48.584256   17571 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:00:48.588717   17571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:00:48.603944   17571 start.go:159] libmachine.API.Create for "offline-docker-984000" (driver="qemu2")
	I0328 12:00:48.603976   17571 client.go:168] LocalClient.Create starting
	I0328 12:00:48.604043   17571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:00:48.604072   17571 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:48.604087   17571 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:48.604127   17571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:00:48.604148   17571 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:48.604153   17571 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:48.604547   17571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:00:48.750116   17571 main.go:141] libmachine: Creating SSH key...
	I0328 12:00:48.913174   17571 main.go:141] libmachine: Creating Disk image...
	I0328 12:00:48.913184   17571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:00:48.916841   17571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:48.931425   17571 main.go:141] libmachine: STDOUT: 
	I0328 12:00:48.931449   17571 main.go:141] libmachine: STDERR: 
	I0328 12:00:48.931509   17571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2 +20000M
	I0328 12:00:48.943359   17571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:00:48.943381   17571 main.go:141] libmachine: STDERR: 
	I0328 12:00:48.943402   17571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:48.943407   17571 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:00:48.943445   17571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e7:86:58:93:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:48.945433   17571 main.go:141] libmachine: STDOUT: 
	I0328 12:00:48.945451   17571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:00:48.945470   17571 client.go:171] duration metric: took 341.485458ms to LocalClient.Create
	I0328 12:00:50.946913   17571 start.go:128] duration metric: took 2.362620791s to createHost
	I0328 12:00:50.946931   17571 start.go:83] releasing machines lock for "offline-docker-984000", held for 2.362687041s
	W0328 12:00:50.946948   17571 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:50.953702   17571 out.go:177] * Deleting "offline-docker-984000" in qemu2 ...
	W0328 12:00:50.963864   17571 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:50.963874   17571 start.go:728] Will try again in 5 seconds ...
	I0328 12:00:55.966011   17571 start.go:360] acquireMachinesLock for offline-docker-984000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:00:55.966141   17571 start.go:364] duration metric: took 98.375µs to acquireMachinesLock for "offline-docker-984000"
	I0328 12:00:55.966172   17571 start.go:93] Provisioning new machine with config: &{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:00:55.966219   17571 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:00:55.997016   17571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:00:56.039377   17571 start.go:159] libmachine.API.Create for "offline-docker-984000" (driver="qemu2")
	I0328 12:00:56.039408   17571 client.go:168] LocalClient.Create starting
	I0328 12:00:56.039515   17571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:00:56.039545   17571 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:56.039555   17571 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:56.039588   17571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:00:56.039608   17571 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:56.039615   17571 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:56.039916   17571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:00:56.791785   17571 main.go:141] libmachine: Creating SSH key...
	I0328 12:00:56.836847   17571 main.go:141] libmachine: Creating Disk image...
	I0328 12:00:56.836857   17571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:00:56.837049   17571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:56.850050   17571 main.go:141] libmachine: STDOUT: 
	I0328 12:00:56.850079   17571 main.go:141] libmachine: STDERR: 
	I0328 12:00:56.850150   17571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2 +20000M
	I0328 12:00:56.861874   17571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:00:56.861910   17571 main.go:141] libmachine: STDERR: 
	I0328 12:00:56.861924   17571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:56.861927   17571 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:00:56.861980   17571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:30:cd:38:d9:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/offline-docker-984000/disk.qcow2
	I0328 12:00:56.863722   17571 main.go:141] libmachine: STDOUT: 
	I0328 12:00:56.863739   17571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:00:56.863754   17571 client.go:171] duration metric: took 824.33175ms to LocalClient.Create
	I0328 12:00:58.865884   17571 start.go:128] duration metric: took 2.899612209s to createHost
	I0328 12:00:58.865910   17571 start.go:83] releasing machines lock for "offline-docker-984000", held for 2.899728958s
	W0328 12:00:58.866008   17571 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:58.876064   17571 out.go:177] 
	W0328 12:00:58.892288   17571 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:00:58.892317   17571 out.go:239] * 
	* 
	W0328 12:00:58.892822   17571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:00:58.906063   17571 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-28 12:00:58.915985 -0700 PDT m=+783.676794501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-984000 -n offline-docker-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-984000 -n offline-docker-984000: exit status 7 (34.863209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-984000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-984000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-984000
--- FAIL: TestOffline (10.59s)
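
This failure, like TestAddons/Setup below, dies in guest provisioning with `Failed to connect to "/var/run/socket_vmnet": Connection refused`: QEMU's socket_vmnet_client found nothing listening on the network helper socket. The uniform ~10 s qemu2 start failures throughout the table above are consistent with the same environmental cause. A triage sketch, assuming the install layout the log itself shows (/opt/socket_vmnet/...) and a Homebrew-managed service:

	# Is the socket present, and is the daemon running? (paths taken from the log above)
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed via Homebrew, restarting the root service usually suffices:
	sudo brew services restart socket_vmnet
	# Or run the daemon in the foreground to watch it accept connections; the gateway
	# address here is the socket_vmnet README's example value, not something this report confirms:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is listening again, re-running any one of the affected tests should show whether the remaining failures in this report share this single cause.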

TestAddons/Setup (11.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-925000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-925000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (11.061465667s)

-- stdout --
	* [addons-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-925000" primary control-plane node in "addons-925000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 11:49:17.829425   15981 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:49:17.829540   15981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:49:17.829544   15981 out.go:304] Setting ErrFile to fd 2...
	I0328 11:49:17.829546   15981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:49:17.829658   15981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:49:17.830783   15981 out.go:298] Setting JSON to false
	I0328 11:49:17.846938   15981 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10129,"bootTime":1711641628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:49:17.847000   15981 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:49:17.852414   15981 out.go:177] * [addons-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:49:17.862480   15981 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:49:17.859543   15981 notify.go:220] Checking for updates...
	I0328 11:49:17.870560   15981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:49:17.877487   15981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:49:17.881524   15981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:49:17.884466   15981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:49:17.887524   15981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:49:17.891735   15981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:49:17.896478   15981 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 11:49:17.903506   15981 start.go:297] selected driver: qemu2
	I0328 11:49:17.903513   15981 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:49:17.903521   15981 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:49:17.906011   15981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:49:17.909513   15981 out.go:177] * Automatically selected the socket_vmnet network
	I0328 11:49:17.913627   15981 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:49:17.913676   15981 cni.go:84] Creating CNI manager for ""
	I0328 11:49:17.913685   15981 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 11:49:17.913690   15981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 11:49:17.913736   15981 start.go:340] cluster config:
	{Name:addons-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:49:17.918644   15981 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:49:17.924448   15981 out.go:177] * Starting "addons-925000" primary control-plane node in "addons-925000" cluster
	I0328 11:49:17.928526   15981 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:49:17.928542   15981 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:49:17.928551   15981 cache.go:56] Caching tarball of preloaded images
	I0328 11:49:17.928607   15981 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:49:17.928613   15981 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:49:17.928892   15981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/addons-925000/config.json ...
	I0328 11:49:17.928903   15981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/addons-925000/config.json: {Name:mk59d78c20f2457f48fbce78bd0409abf9eed6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:49:17.929121   15981 start.go:360] acquireMachinesLock for addons-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:49:17.929292   15981 start.go:364] duration metric: took 163.917µs to acquireMachinesLock for "addons-925000"
	I0328 11:49:17.929307   15981 start.go:93] Provisioning new machine with config: &{Name:addons-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:49:17.929338   15981 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:49:17.934527   15981 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0328 11:49:17.953323   15981 start.go:159] libmachine.API.Create for "addons-925000" (driver="qemu2")
	I0328 11:49:17.953354   15981 client.go:168] LocalClient.Create starting
	I0328 11:49:17.953472   15981 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:49:18.118473   15981 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:49:18.282476   15981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:49:19.045968   15981 main.go:141] libmachine: Creating SSH key...
	I0328 11:49:19.239204   15981 main.go:141] libmachine: Creating Disk image...
	I0328 11:49:19.239210   15981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:49:19.239429   15981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:19.252507   15981 main.go:141] libmachine: STDOUT: 
	I0328 11:49:19.252549   15981 main.go:141] libmachine: STDERR: 
	I0328 11:49:19.252608   15981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2 +20000M
	I0328 11:49:19.263173   15981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:49:19.263190   15981 main.go:141] libmachine: STDERR: 
	I0328 11:49:19.263210   15981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:19.263215   15981 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:49:19.263251   15981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:d7:13:d1:18:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:19.265018   15981 main.go:141] libmachine: STDOUT: 
	I0328 11:49:19.265034   15981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:49:19.265052   15981 client.go:171] duration metric: took 1.311694959s to LocalClient.Create
	I0328 11:49:21.267231   15981 start.go:128] duration metric: took 3.3378785s to createHost
	I0328 11:49:21.267357   15981 start.go:83] releasing machines lock for "addons-925000", held for 3.338018791s
	W0328 11:49:21.267434   15981 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:49:21.276835   15981 out.go:177] * Deleting "addons-925000" in qemu2 ...
	W0328 11:49:21.310436   15981 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:49:21.310469   15981 start.go:728] Will try again in 5 seconds ...
	I0328 11:49:26.312629   15981 start.go:360] acquireMachinesLock for addons-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:49:26.313034   15981 start.go:364] duration metric: took 316.75µs to acquireMachinesLock for "addons-925000"
	I0328 11:49:26.313142   15981 start.go:93] Provisioning new machine with config: &{Name:addons-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:49:26.313384   15981 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:49:26.326355   15981 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0328 11:49:26.374368   15981 start.go:159] libmachine.API.Create for "addons-925000" (driver="qemu2")
	I0328 11:49:26.374427   15981 client.go:168] LocalClient.Create starting
	I0328 11:49:26.374537   15981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:49:26.374588   15981 main.go:141] libmachine: Decoding PEM data...
	I0328 11:49:26.374605   15981 main.go:141] libmachine: Parsing certificate...
	I0328 11:49:26.374698   15981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:49:26.374740   15981 main.go:141] libmachine: Decoding PEM data...
	I0328 11:49:26.374755   15981 main.go:141] libmachine: Parsing certificate...
	I0328 11:49:26.375231   15981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:49:26.617775   15981 main.go:141] libmachine: Creating SSH key...
	I0328 11:49:26.785683   15981 main.go:141] libmachine: Creating Disk image...
	I0328 11:49:26.785691   15981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:49:26.785872   15981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:26.798245   15981 main.go:141] libmachine: STDOUT: 
	I0328 11:49:26.798271   15981 main.go:141] libmachine: STDERR: 
	I0328 11:49:26.798327   15981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2 +20000M
	I0328 11:49:26.809187   15981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:49:26.809224   15981 main.go:141] libmachine: STDERR: 
	I0328 11:49:26.809240   15981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:26.809246   15981 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:49:26.809281   15981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cd:1a:e9:72:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/addons-925000/disk.qcow2
	I0328 11:49:26.811128   15981 main.go:141] libmachine: STDOUT: 
	I0328 11:49:26.811144   15981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:49:26.811159   15981 client.go:171] duration metric: took 436.726334ms to LocalClient.Create
	I0328 11:49:28.813513   15981 start.go:128] duration metric: took 2.500047167s to createHost
	I0328 11:49:28.813702   15981 start.go:83] releasing machines lock for "addons-925000", held for 2.500632125s
	W0328 11:49:28.814257   15981 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:49:28.823860   15981 out.go:177] 
	W0328 11:49:28.833054   15981 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:49:28.833087   15981 out.go:239] * 
	* 
	W0328 11:49:28.835639   15981 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:49:28.844769   15981 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-925000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (11.06s)
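
Note: the "executing:" lines above show the launch mechanism: qemu-system-aarch64 is wrapped by /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the UNIX socket before QEMU receives fd 3 for its "-netdev socket,id=net0,fd=3" backend. A quick connectivity probe, assuming the BSD nc bundled with macOS (its -U flag targets UNIX-domain sockets):

	nc -U /var/run/socket_vmnet < /dev/null \
	  && echo "socket reachable" \
	  || echo "connection refused, matching the failures above"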

TestCertOptions (12.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-243000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-243000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (12.07261225s)

-- stdout --
	* [cert-options-243000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-243000" primary control-plane node in "cert-options-243000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-243000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-243000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-243000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-243000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-243000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.167416ms)

-- stdout --
	* The control-plane node cert-options-243000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-243000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-243000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-243000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-243000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-243000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.251875ms)

-- stdout --
	* The control-plane node cert-options-243000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-243000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-243000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-243000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-243000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-28 12:01:33.964687 -0700 PDT m=+818.725083876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-243000 -n cert-options-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-243000 -n cert-options-243000: exit status 7 (31.824792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-243000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-243000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-243000
--- FAIL: TestCertOptions (12.37s)
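
Note: the SAN messages at cert_options_test.go:69 are a downstream effect, not independent failures: the ssh step exited 83 because the host never ran, so no certificate was available to inspect. For reference, the check reduces to the same openssl invocation the test runs inside the guest; /tmp/apiserver.crt below is a hypothetical placeholder path:

	openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"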

TestCertExpiration (197.67s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.330793167s)

-- stdout --
	* [cert-expiration-447000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-447000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.191384125s)

-- stdout --
	* [cert-expiration-447000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-447000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-28 12:04:36.611984 -0700 PDT m=+1001.370228251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-447000 -n cert-expiration-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-447000 -n cert-expiration-447000: exit status 7 (43.988167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-447000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-447000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-447000
--- FAIL: TestCertExpiration (197.67s)
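
Note: most of the 197.67s is deliberate waiting, not retries: the test first starts the cluster with certificates valid for only 3 minutes, waits for them to expire, then restarts with an 8760h expiration and expects a renewal warning. Reduced to its essential commands (the sleep is an inferred stand-in for the test's internal wait):

	out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180    # let the 3-minute certificates expire
	out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2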

TestDockerFlags (12.58s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-848000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-848000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.161862666s)

-- stdout --
	* [docker-flags-848000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-848000" primary control-plane node in "docker-flags-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:01:09.188914   17771 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:01:09.189044   17771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:01:09.189048   17771 out.go:304] Setting ErrFile to fd 2...
	I0328 12:01:09.189050   17771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:01:09.189176   17771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:01:09.190283   17771 out.go:298] Setting JSON to false
	I0328 12:01:09.206788   17771 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10841,"bootTime":1711641628,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:01:09.206855   17771 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:01:09.213775   17771 out.go:177] * [docker-flags-848000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:01:09.221737   17771 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:01:09.225664   17771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:01:09.221857   17771 notify.go:220] Checking for updates...
	I0328 12:01:09.232699   17771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:01:09.235629   17771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:01:09.238682   17771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:01:09.241690   17771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:01:09.244924   17771 config.go:182] Loaded profile config "force-systemd-flag-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:01:09.244983   17771 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:01:09.245040   17771 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:01:09.249623   17771 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:01:09.255648   17771 start.go:297] selected driver: qemu2
	I0328 12:01:09.255653   17771 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:01:09.255660   17771 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:01:09.257676   17771 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:01:09.261753   17771 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:01:09.264695   17771 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0328 12:01:09.264733   17771 cni.go:84] Creating CNI manager for ""
	I0328 12:01:09.264741   17771 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:01:09.264750   17771 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:01:09.264785   17771 start.go:340] cluster config:
	{Name:docker-flags-848000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:01:09.269087   17771 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:01:09.275679   17771 out.go:177] * Starting "docker-flags-848000" primary control-plane node in "docker-flags-848000" cluster
	I0328 12:01:09.279537   17771 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:01:09.279563   17771 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:01:09.279579   17771 cache.go:56] Caching tarball of preloaded images
	I0328 12:01:09.279668   17771 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:01:09.279673   17771 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:01:09.279751   17771 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/docker-flags-848000/config.json ...
	I0328 12:01:09.279764   17771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/docker-flags-848000/config.json: {Name:mk4cc8e6aa4ea8e1e24b4d46339c2c300dc575ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:01:09.279940   17771 start.go:360] acquireMachinesLock for docker-flags-848000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:01:11.284811   17771 start.go:364] duration metric: took 2.004801125s to acquireMachinesLock for "docker-flags-848000"
	I0328 12:01:11.284937   17771 start.go:93] Provisioning new machine with config: &{Name:docker-flags-848000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:01:11.285170   17771 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:01:11.295821   17771 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:01:11.344598   17771 start.go:159] libmachine.API.Create for "docker-flags-848000" (driver="qemu2")
	I0328 12:01:11.344641   17771 client.go:168] LocalClient.Create starting
	I0328 12:01:11.344801   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:01:11.344863   17771 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:11.344887   17771 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:11.344970   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:01:11.345018   17771 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:11.345034   17771 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:11.345671   17771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:01:11.505449   17771 main.go:141] libmachine: Creating SSH key...
	I0328 12:01:11.590843   17771 main.go:141] libmachine: Creating Disk image...
	I0328 12:01:11.590849   17771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:01:11.591068   17771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:11.603813   17771 main.go:141] libmachine: STDOUT: 
	I0328 12:01:11.603838   17771 main.go:141] libmachine: STDERR: 
	I0328 12:01:11.603893   17771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2 +20000M
	I0328 12:01:11.614665   17771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:01:11.614700   17771 main.go:141] libmachine: STDERR: 
	I0328 12:01:11.614713   17771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:11.614718   17771 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:01:11.614746   17771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ed:65:7b:da:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:11.616525   17771 main.go:141] libmachine: STDOUT: 
	I0328 12:01:11.616551   17771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:01:11.616573   17771 client.go:171] duration metric: took 271.920875ms to LocalClient.Create
	I0328 12:01:13.618777   17771 start.go:128] duration metric: took 2.333547459s to createHost
	I0328 12:01:13.618863   17771 start.go:83] releasing machines lock for "docker-flags-848000", held for 2.333989584s
	W0328 12:01:13.618910   17771 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:13.626213   17771 out.go:177] * Deleting "docker-flags-848000" in qemu2 ...
	W0328 12:01:13.664805   17771 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:13.664849   17771 start.go:728] Will try again in 5 seconds ...
	I0328 12:01:18.665673   17771 start.go:360] acquireMachinesLock for docker-flags-848000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:01:18.665967   17771 start.go:364] duration metric: took 228.25µs to acquireMachinesLock for "docker-flags-848000"
	I0328 12:01:18.666079   17771 start.go:93] Provisioning new machine with config: &{Name:docker-flags-848000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-848000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:01:18.666340   17771 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:01:18.676874   17771 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:01:18.725652   17771 start.go:159] libmachine.API.Create for "docker-flags-848000" (driver="qemu2")
	I0328 12:01:18.725708   17771 client.go:168] LocalClient.Create starting
	I0328 12:01:18.725788   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:01:18.725834   17771 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:18.725853   17771 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:18.725925   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:01:18.725952   17771 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:18.725965   17771 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:18.726473   17771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:01:19.088857   17771 main.go:141] libmachine: Creating SSH key...
	I0328 12:01:19.252004   17771 main.go:141] libmachine: Creating Disk image...
	I0328 12:01:19.252011   17771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:01:19.252157   17771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:19.264320   17771 main.go:141] libmachine: STDOUT: 
	I0328 12:01:19.264340   17771 main.go:141] libmachine: STDERR: 
	I0328 12:01:19.264388   17771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2 +20000M
	I0328 12:01:19.275165   17771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:01:19.275182   17771 main.go:141] libmachine: STDERR: 
	I0328 12:01:19.275191   17771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:19.275195   17771 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:01:19.275223   17771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0f:81:55:82:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/docker-flags-848000/disk.qcow2
	I0328 12:01:19.276954   17771 main.go:141] libmachine: STDOUT: 
	I0328 12:01:19.276972   17771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:01:19.276985   17771 client.go:171] duration metric: took 551.265916ms to LocalClient.Create
	I0328 12:01:21.279186   17771 start.go:128] duration metric: took 2.612764s to createHost
	I0328 12:01:21.279266   17771 start.go:83] releasing machines lock for "docker-flags-848000", held for 2.61324225s
	W0328 12:01:21.279531   17771 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:21.294095   17771 out.go:177] 
	W0328 12:01:21.299341   17771 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:01:21.299373   17771 out.go:239] * 
	* 
	W0328 12:01:21.301377   17771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:01:21.311038   17771 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-848000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
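Exit status 80 maps to minikube's GUEST_PROVISION error class. Both creation attempts in the log above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, which indicates the socket_vmnet daemon is not running on the CI host. A minimal sketch for confirming that on the host, assuming the paths shown in the log:

    # socket file should exist if the daemon ever started
    ls -l /var/run/socket_vmnet
    # daemon process should be running
    pgrep -fl socket_vmnet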
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-848000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-848000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (115.098542ms)

-- stdout --
	* The control-plane node docker-flags-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-848000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-848000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-848000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-848000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-848000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-848000\"\n"*.
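This assertion checks that the --docker-env values were injected into the docker systemd unit inside the guest; it can only fail here because the guest never booted. On a healthy node, systemd prints the unit's environment on a single line, so a passing run would look roughly like the following sketch (illustrative output, not captured from a real run):

    $ sudo systemctl show docker --property=Environment --no-pager
    Environment=FOO=BAR BAZ=BAT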
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-848000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-848000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (103.922458ms)

-- stdout --
	* The control-plane node docker-flags-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-848000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-848000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-848000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-848000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-848000\"\n"
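Likewise, the --docker-opt values should surface as dockerd flags in the unit's ExecStart. An illustrative shape of a passing check, with an assumed dockerd path (the test only asserts that --debug appears):

    $ sudo systemctl show docker --property=ExecStart --no-pager
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true ... }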
panic.go:626: *** TestDockerFlags FAILED at 2024-03-28 12:01:21.541592 -0700 PDT m=+806.302134835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-848000 -n docker-flags-848000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-848000 -n docker-flags-848000: exit status 7 (37.269041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-848000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-848000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-848000
--- FAIL: TestDockerFlags (12.58s)
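The failure is environmental rather than a regression in the flag plumbing: no VM ever booted, so no guest-side assertion could run. The host-side remediation is to start the socket_vmnet daemon before the suite runs; one plausible form, following the socket_vmnet README for the /opt/socket_vmnet install layout seen in the log (the gateway address is an example value, not taken from this run):

    # vmnet requires root; the socket path must match SocketVMnetPath above
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet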

TestForceSystemdFlag (12.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-641000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-641000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.606839208s)

-- stdout --
	* [force-systemd-flag-641000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-641000" primary control-plane node in "force-systemd-flag-641000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-641000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:01:07.098784   17754 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:01:07.098912   17754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:01:07.098915   17754 out.go:304] Setting ErrFile to fd 2...
	I0328 12:01:07.098917   17754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:01:07.099046   17754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:01:07.100074   17754 out.go:298] Setting JSON to false
	I0328 12:01:07.116102   17754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10839,"bootTime":1711641628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:01:07.116159   17754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:01:07.122079   17754 out.go:177] * [force-systemd-flag-641000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:01:07.129975   17754 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:01:07.135010   17754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:01:07.130064   17754 notify.go:220] Checking for updates...
	I0328 12:01:07.142927   17754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:01:07.145967   17754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:01:07.147447   17754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:01:07.150967   17754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:01:07.154342   17754 config.go:182] Loaded profile config "force-systemd-env-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:01:07.154411   17754 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:01:07.154461   17754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:01:07.158862   17754 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:01:07.165961   17754 start.go:297] selected driver: qemu2
	I0328 12:01:07.165966   17754 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:01:07.165972   17754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:01:07.168180   17754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:01:07.170982   17754 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:01:07.174054   17754 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 12:01:07.174090   17754 cni.go:84] Creating CNI manager for ""
	I0328 12:01:07.174098   17754 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:01:07.174102   17754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:01:07.174145   17754 start.go:340] cluster config:
	{Name:force-systemd-flag-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:01:07.178870   17754 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:01:07.186928   17754 out.go:177] * Starting "force-systemd-flag-641000" primary control-plane node in "force-systemd-flag-641000" cluster
	I0328 12:01:07.190978   17754 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:01:07.191000   17754 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:01:07.191014   17754 cache.go:56] Caching tarball of preloaded images
	I0328 12:01:07.191097   17754 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:01:07.191103   17754 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:01:07.191173   17754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/force-systemd-flag-641000/config.json ...
	I0328 12:01:07.191185   17754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/force-systemd-flag-641000/config.json: {Name:mkb9c78d45e62876354d0841fb629f039cac37ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:01:07.191562   17754 start.go:360] acquireMachinesLock for force-systemd-flag-641000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:01:08.747902   17754 start.go:364] duration metric: took 1.556267416s to acquireMachinesLock for "force-systemd-flag-641000"
	I0328 12:01:08.748121   17754 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:01:08.748368   17754 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:01:08.758503   17754 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:01:08.809531   17754 start.go:159] libmachine.API.Create for "force-systemd-flag-641000" (driver="qemu2")
	I0328 12:01:08.809572   17754 client.go:168] LocalClient.Create starting
	I0328 12:01:08.809715   17754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:01:08.809772   17754 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:08.809792   17754 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:08.809870   17754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:01:08.809914   17754 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:08.809935   17754 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:08.810579   17754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:01:09.127329   17754 main.go:141] libmachine: Creating SSH key...
	I0328 12:01:09.253289   17754 main.go:141] libmachine: Creating Disk image...
	I0328 12:01:09.253296   17754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:01:09.253441   17754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:09.266594   17754 main.go:141] libmachine: STDOUT: 
	I0328 12:01:09.266615   17754 main.go:141] libmachine: STDERR: 
	I0328 12:01:09.266677   17754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2 +20000M
	I0328 12:01:09.280276   17754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:01:09.280305   17754 main.go:141] libmachine: STDERR: 
	I0328 12:01:09.280321   17754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:09.280326   17754 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:01:09.280355   17754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:3c:4c:19:c9:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:09.282146   17754 main.go:141] libmachine: STDOUT: 
	I0328 12:01:09.282182   17754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:01:09.282197   17754 client.go:171] duration metric: took 472.595625ms to LocalClient.Create
	I0328 12:01:11.284379   17754 start.go:128] duration metric: took 2.535954291s to createHost
	I0328 12:01:11.284433   17754 start.go:83] releasing machines lock for "force-systemd-flag-641000", held for 2.53646175s
	W0328 12:01:11.284483   17754 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:11.309099   17754 out.go:177] * Deleting "force-systemd-flag-641000" in qemu2 ...
	W0328 12:01:11.330415   17754 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:11.330433   17754 start.go:728] Will try again in 5 seconds ...
	I0328 12:01:16.332720   17754 start.go:360] acquireMachinesLock for force-systemd-flag-641000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:01:16.333049   17754 start.go:364] duration metric: took 250.625µs to acquireMachinesLock for "force-systemd-flag-641000"
	I0328 12:01:16.333150   17754 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:01:16.333352   17754 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:01:16.339139   17754 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:01:16.386052   17754 start.go:159] libmachine.API.Create for "force-systemd-flag-641000" (driver="qemu2")
	I0328 12:01:16.386098   17754 client.go:168] LocalClient.Create starting
	I0328 12:01:16.386237   17754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:01:16.386305   17754 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:16.386320   17754 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:16.386377   17754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:01:16.386421   17754 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:16.386440   17754 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:16.386943   17754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:01:16.546555   17754 main.go:141] libmachine: Creating SSH key...
	I0328 12:01:16.601800   17754 main.go:141] libmachine: Creating Disk image...
	I0328 12:01:16.601804   17754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:01:16.601973   17754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:16.614069   17754 main.go:141] libmachine: STDOUT: 
	I0328 12:01:16.614168   17754 main.go:141] libmachine: STDERR: 
	I0328 12:01:16.614225   17754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2 +20000M
	I0328 12:01:16.625039   17754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:01:16.625115   17754 main.go:141] libmachine: STDERR: 
	I0328 12:01:16.625130   17754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:16.625134   17754 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:01:16.625164   17754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f9:32:4f:ba:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-flag-641000/disk.qcow2
	I0328 12:01:16.626897   17754 main.go:141] libmachine: STDOUT: 
	I0328 12:01:16.627005   17754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:01:16.627019   17754 client.go:171] duration metric: took 240.91375ms to LocalClient.Create
	I0328 12:01:18.628865   17754 start.go:128] duration metric: took 2.29543575s to createHost
	I0328 12:01:18.628954   17754 start.go:83] releasing machines lock for "force-systemd-flag-641000", held for 2.295858709s
	W0328 12:01:18.629302   17754 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:18.638677   17754 out.go:177] 
	W0328 12:01:18.646065   17754 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:01:18.646097   17754 out.go:239] * 
	* 
	W0328 12:01:18.648275   17754 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:01:18.657923   17754 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-641000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-641000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-641000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (111.130667ms)

-- stdout --
	* The control-plane node force-systemd-flag-641000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-641000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-641000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
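docker_test.go:110 is the core assertion of this test: with --force-systemd, the guest's docker daemon should report the systemd cgroup driver. Against a running node, the check and the value the test expects would look like:

    $ out/minikube-darwin-arm64 -p force-systemd-flag-641000 ssh "docker info --format {{.CgroupDriver}}"
    systemd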
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-28 12:01:18.79043 -0700 PDT m=+803.551004876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-641000 -n force-systemd-flag-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-641000 -n force-systemd-flag-641000: exit status 7 (42.276875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-641000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-641000
--- FAIL: TestForceSystemdFlag (12.01s)

TestForceSystemdEnv (10.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-080000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-080000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.731669041s)

-- stdout --
	* [force-systemd-env-080000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-080000" primary control-plane node in "force-systemd-env-080000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-080000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:00:59.087232   17713 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:00:59.087369   17713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:59.087373   17713 out.go:304] Setting ErrFile to fd 2...
	I0328 12:00:59.087378   17713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:59.087502   17713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:00:59.088433   17713 out.go:298] Setting JSON to false
	I0328 12:00:59.104595   17713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10831,"bootTime":1711641628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:00:59.104666   17713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:00:59.110082   17713 out.go:177] * [force-systemd-env-080000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:00:59.117055   17713 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:00:59.121044   17713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:00:59.117104   17713 notify.go:220] Checking for updates...
	I0328 12:00:59.124128   17713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:00:59.127024   17713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:00:59.130075   17713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:00:59.133072   17713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0328 12:00:59.136298   17713 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:00:59.136350   17713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:00:59.139984   17713 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:00:59.147041   17713 start.go:297] selected driver: qemu2
	I0328 12:00:59.147048   17713 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:00:59.147056   17713 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:00:59.149245   17713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:00:59.152050   17713 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:00:59.155158   17713 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 12:00:59.155208   17713 cni.go:84] Creating CNI manager for ""
	I0328 12:00:59.155216   17713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:00:59.155219   17713 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:00:59.155250   17713 start.go:340] cluster config:
	{Name:force-systemd-env-080000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:00:59.159797   17713 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:59.167051   17713 out.go:177] * Starting "force-systemd-env-080000" primary control-plane node in "force-systemd-env-080000" cluster
	I0328 12:00:59.170863   17713 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:00:59.170884   17713 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:00:59.170902   17713 cache.go:56] Caching tarball of preloaded images
	I0328 12:00:59.170956   17713 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:00:59.170962   17713 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:00:59.171030   17713 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/force-systemd-env-080000/config.json ...
	I0328 12:00:59.171045   17713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/force-systemd-env-080000/config.json: {Name:mk03b460501e69ae5066ce1221398dde10f27c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:00:59.171264   17713 start.go:360] acquireMachinesLock for force-systemd-env-080000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:00:59.171296   17713 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "force-systemd-env-080000"
	I0328 12:00:59.171309   17713 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:00:59.171332   17713 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:00:59.179928   17713 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:00:59.197105   17713 start.go:159] libmachine.API.Create for "force-systemd-env-080000" (driver="qemu2")
	I0328 12:00:59.197137   17713 client.go:168] LocalClient.Create starting
	I0328 12:00:59.197203   17713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:00:59.197236   17713 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:59.197248   17713 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:59.197297   17713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:00:59.197319   17713 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:59.197328   17713 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:59.197706   17713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:00:59.345062   17713 main.go:141] libmachine: Creating SSH key...
	I0328 12:00:59.381050   17713 main.go:141] libmachine: Creating Disk image...
	I0328 12:00:59.381058   17713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:00:59.381242   17713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:00:59.393767   17713 main.go:141] libmachine: STDOUT: 
	I0328 12:00:59.393790   17713 main.go:141] libmachine: STDERR: 
	I0328 12:00:59.393872   17713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2 +20000M
	I0328 12:00:59.404585   17713 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:00:59.404601   17713 main.go:141] libmachine: STDERR: 
	I0328 12:00:59.404621   17713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:00:59.404625   17713 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:00:59.404664   17713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:55:c9:c9:76:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:00:59.406351   17713 main.go:141] libmachine: STDOUT: 
	I0328 12:00:59.406374   17713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:00:59.406391   17713 client.go:171] duration metric: took 209.247541ms to LocalClient.Create
	I0328 12:01:01.408655   17713 start.go:128] duration metric: took 2.237265416s to createHost
	I0328 12:01:01.408738   17713 start.go:83] releasing machines lock for "force-systemd-env-080000", held for 2.237407417s
	W0328 12:01:01.408792   17713 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:01.427199   17713 out.go:177] * Deleting "force-systemd-env-080000" in qemu2 ...
	W0328 12:01:01.454180   17713 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:01.454224   17713 start.go:728] Will try again in 5 seconds ...
	I0328 12:01:06.456476   17713 start.go:360] acquireMachinesLock for force-systemd-env-080000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:01:06.456998   17713 start.go:364] duration metric: took 431.917µs to acquireMachinesLock for "force-systemd-env-080000"
	I0328 12:01:06.457130   17713 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:01:06.457366   17713 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:01:06.468043   17713 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0328 12:01:06.516952   17713 start.go:159] libmachine.API.Create for "force-systemd-env-080000" (driver="qemu2")
	I0328 12:01:06.517002   17713 client.go:168] LocalClient.Create starting
	I0328 12:01:06.517125   17713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:01:06.517197   17713 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:06.517214   17713 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:06.517273   17713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:01:06.517313   17713 main.go:141] libmachine: Decoding PEM data...
	I0328 12:01:06.517325   17713 main.go:141] libmachine: Parsing certificate...
	I0328 12:01:06.517836   17713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:01:06.677157   17713 main.go:141] libmachine: Creating SSH key...
	I0328 12:01:06.718724   17713 main.go:141] libmachine: Creating Disk image...
	I0328 12:01:06.718730   17713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:01:06.718901   17713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:01:06.731777   17713 main.go:141] libmachine: STDOUT: 
	I0328 12:01:06.731801   17713 main.go:141] libmachine: STDERR: 
	I0328 12:01:06.731888   17713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2 +20000M
	I0328 12:01:06.743285   17713 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:01:06.743304   17713 main.go:141] libmachine: STDERR: 
	I0328 12:01:06.743317   17713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:01:06.743324   17713 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:01:06.743368   17713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:32:13:26:50:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/force-systemd-env-080000/disk.qcow2
	I0328 12:01:06.745282   17713 main.go:141] libmachine: STDOUT: 
	I0328 12:01:06.745308   17713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:01:06.745323   17713 client.go:171] duration metric: took 228.313375ms to LocalClient.Create
	I0328 12:01:08.747669   17713 start.go:128] duration metric: took 2.290134334s to createHost
	I0328 12:01:08.747744   17713 start.go:83] releasing machines lock for "force-systemd-env-080000", held for 2.290691834s
	W0328 12:01:08.748101   17713 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:01:08.761736   17713 out.go:177] 
	W0328 12:01:08.765845   17713 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:01:08.765890   17713 out.go:239] * 
	* 
	W0328 12:01:08.768381   17713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:01:08.778596   17713 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-080000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-080000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-080000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (110.928542ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-080000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-080000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-080000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-28 12:01:08.901328 -0700 PDT m=+793.662019460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-080000 -n force-systemd-env-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-080000 -n force-systemd-env-080000: exit status 7 (39.928666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-080000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-080000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-080000
--- FAIL: TestForceSystemdEnv (10.10s)
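
Every failure in this report reduces to the same condition: nothing was accepting connections on /var/run/socket_vmnet when socket_vmnet_client launched QEMU. A minimal Go sketch of that probe, assuming only the SocketVMnetPath shown in the config dump above (this program is illustrative and not part of the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the same unix socket that socket_vmnet_client hands to QEMU.
// A "connection refused" here reproduces the error seen in every failed
// test above, and suggests the socket_vmnet daemon on the CI host is
// not running (or is listening on a different path).
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}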

                                                
                                    
TestErrorSpam/setup (9.83s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-796000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-796000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 --driver=qemu2 : exit status 80 (9.828409625s)

                                                
                                                
-- stdout --
	* [nospam-796000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-796000" primary control-plane node in "nospam-796000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-796000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-796000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-796000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=17877
- KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-796000" primary control-plane node in "nospam-796000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-796000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.83s)
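
This test applies two checks visible above: the captured stdout must contain the kubeadm init sub-steps, and stderr must contain nothing unexpected. A sketch of that screening in Go, with inputs abbreviated from the captured output (the allow-listing in the real error_spam_test.go differs in detail):

package main

import (
	"fmt"
	"strings"
)

func main() {
	stdout := `* Starting "nospam-796000" primary control-plane node in "nospam-796000" cluster`
	stderr := `! StartHost failed, but will try again: creating host: ...`

	// Required sub-steps: their absence produces the three
	// "missing kubeadm init sub-step" failures above.
	for _, step := range []string{
		"Generating certificates and keys ...",
		"Booting up control plane ...",
		"Configuring RBAC rules ...",
	} {
		if !strings.Contains(stdout, step) {
			fmt.Printf("missing kubeadm init sub-step %q\n", step)
		}
	}
	// Any non-empty stderr line is reported as "unexpected stderr".
	for _, line := range strings.Split(stderr, "\n") {
		if strings.TrimSpace(line) != "" {
			fmt.Printf("unexpected stderr: %q\n", line)
		}
	}
}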

                                                
                                    
TestFunctional/serial/StartWithProxy (10.15s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.077829459s)

                                                
                                                
-- stdout --
	* [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=17877
- KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-908000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52968 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (68.684333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.15s)
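
The two strings this test wants ("Found network options" and "You appear to be using a proxy") never appear because startup aborts before proxy handling; only the "Local proxy ignored" warning fires. The rule behind that warning, sketched in Go under the assumption that minikube simply declines to forward a loopback proxy into the VM (isLoopbackHost is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"net"
)

// isLoopbackHost reports whether a proxy host points at the local
// machine. A proxy such as HTTP_PROXY=localhost:52968 is unreachable
// from inside the guest VM, so passing it into the docker env would
// break image pulls -- hence "Local proxy ignored" above.
func isLoopbackHost(host string) bool {
	if host == "localhost" {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	// Value taken from the test's HTTP_PROXY seen in the stderr above.
	host, _, _ := net.SplitHostPort("localhost:52968")
	fmt.Println(isLoopbackHost(host)) // true -> warning fires
}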

                                                
                                    
TestFunctional/serial/SoftStart (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8: exit status 80 (5.191421s)

                                                
                                                
-- stdout --
	* [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:49:59.280425   16138 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:49:59.280571   16138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:49:59.280574   16138 out.go:304] Setting ErrFile to fd 2...
	I0328 11:49:59.280576   16138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:49:59.280692   16138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:49:59.281631   16138 out.go:298] Setting JSON to false
	I0328 11:49:59.297608   16138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10171,"bootTime":1711641628,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:49:59.297666   16138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:49:59.302949   16138 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:49:59.310854   16138 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:49:59.310890   16138 notify.go:220] Checking for updates...
	I0328 11:49:59.314896   16138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:49:59.318733   16138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:49:59.322892   16138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:49:59.326831   16138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:49:59.329825   16138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:49:59.333192   16138 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:49:59.333250   16138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:49:59.336937   16138 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:49:59.343848   16138 start.go:297] selected driver: qemu2
	I0328 11:49:59.343853   16138 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:49:59.343904   16138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:49:59.346157   16138 cni.go:84] Creating CNI manager for ""
	I0328 11:49:59.346174   16138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 11:49:59.346215   16138 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:49:59.350613   16138 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:49:59.358872   16138 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0328 11:49:59.362906   16138 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:49:59.362928   16138 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:49:59.362943   16138 cache.go:56] Caching tarball of preloaded images
	I0328 11:49:59.363005   16138 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:49:59.363011   16138 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:49:59.363072   16138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/functional-908000/config.json ...
	I0328 11:49:59.363558   16138 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:49:59.363585   16138 start.go:364] duration metric: took 21.583µs to acquireMachinesLock for "functional-908000"
	I0328 11:49:59.363594   16138 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:49:59.363600   16138 fix.go:54] fixHost starting: 
	I0328 11:49:59.363720   16138 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0328 11:49:59.363731   16138 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:49:59.367852   16138 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0328 11:49:59.375894   16138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
	I0328 11:49:59.377908   16138 main.go:141] libmachine: STDOUT: 
	I0328 11:49:59.377930   16138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:49:59.377960   16138 fix.go:56] duration metric: took 14.359792ms for fixHost
	I0328 11:49:59.377965   16138 start.go:83] releasing machines lock for "functional-908000", held for 14.376334ms
	W0328 11:49:59.377971   16138 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:49:59.378020   16138 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:49:59.378025   16138 start.go:728] Will try again in 5 seconds ...
	I0328 11:50:04.380155   16138 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:50:04.380569   16138 start.go:364] duration metric: took 324.75µs to acquireMachinesLock for "functional-908000"
	I0328 11:50:04.380714   16138 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:50:04.380735   16138 fix.go:54] fixHost starting: 
	I0328 11:50:04.381444   16138 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0328 11:50:04.381469   16138 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:50:04.390858   16138 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0328 11:50:04.393972   16138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
	I0328 11:50:04.404169   16138 main.go:141] libmachine: STDOUT: 
	I0328 11:50:04.404248   16138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:50:04.404350   16138 fix.go:56] duration metric: took 23.618291ms for fixHost
	I0328 11:50:04.404368   16138 start.go:83] releasing machines lock for "functional-908000", held for 23.772292ms
	W0328 11:50:04.404523   16138 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:50:04.411881   16138 out.go:177] 
	W0328 11:50:04.415902   16138 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:50:04.415942   16138 out.go:239] * 
	* 
	W0328 11:50:04.418810   16138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:50:04.425852   16138 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.1931995s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (69.71475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
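
The trace above shows the shape of minikube's start path for an existing profile: fixHost finds the machine Stopped, restarts the VM, and on failure retries once after five seconds before exiting with GUEST_PROVISION. That control flow, reduced to a Go sketch (startHost is a stand-in, not the real function):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the driver-start call that fails above.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status 80 the test harness reports
	}
}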

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.398958ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-908000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.874083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
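
The assertion here is simply that `kubectl config current-context` prints the profile name; since no cluster was ever started, no context was written to the kubeconfig. The check, sketched with os/exec (the expected name is taken from the test output above; the real test harness differs in detail):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "functional-908000"
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	got := strings.TrimSpace(string(out))
	if err != nil || got != want {
		// With no cluster started this reproduces the failure above:
		// kubectl exits 1 with "error: current-context is not set".
		fmt.Printf("expected current-context = %q, but got %q (err: %v)\n", want, got, err)
		return
	}
	fmt.Println("context OK:", got)
}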

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-908000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-908000 get po -A: exit status 1 (26.416541ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-908000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-908000\n"*: args "kubectl --context functional-908000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-908000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.762125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images: exit status 83 (44.735292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
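
Each `(dbg) Run:` / `Non-zero exit:` pair in this report is a command executed by the test harness with its exit status recovered from the returned error. A self-contained sketch of that pattern (the command line is the one from this test; the actual harness code in helpers_test.go differs in detail):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-908000",
		"ssh", "sudo", "crictl", "images")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// For a stopped host, minikube prints the "host is not running"
		// hint on stdout and exits non-zero (status 83, as captured above).
		fmt.Printf("Non-zero exit: %d\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("%s", out)
}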

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (44.866666ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.819542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.714291ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods: exit status 1 (656.146292ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (33.803583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-908000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-908000 get pods: exit status 1 (902.148083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.312875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
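[editor's note] Both kubectl tests fail identically: because the cluster never provisioned, no "functional-908000" context or cluster entry exists in the kubeconfig, so any --context functional-908000 invocation dies in configuration before reaching a server. A hedged way to confirm from the agent (standard kubectl subcommands, not part of the test suite):

    kubectl config get-contexts                              # functional-908000 should be missing
    kubectl config view -o jsonpath='{.clusters[*].name}'    # no functional-908000 cluster entry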

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.183336584s)

-- stdout --
	* [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.183861583s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (69.5025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
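[editor's note] The root cause surfaces here: the qemu2 driver brokers VM networking through socket_vmnet, and both restart attempts get "Connection refused" dialing /var/run/socket_vmnet, so the guest never boots and every later test inherits a stopped host. A diagnostic sketch, assuming socket_vmnet runs as a launchd service on this agent (that assumption and the grep pattern are guesses, not taken from this log):

    ls -l /var/run/socket_vmnet          # does the unix socket exist at all?
    sudo launchctl list | grep -i vmnet  # is a socket_vmnet daemon actually loaded?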

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.774875ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.0945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 logs: exit status 83 (81.772958ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |                     |
	|         | -p download-only-603000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
	|         | -p download-only-625000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| start   | -o=json --download-only                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
	|         | -p download-only-549000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| start   | --download-only -p                                                       | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | binary-mirror-643000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:52935                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-643000                                                  | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| addons  | enable dashboard -p                                                      | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | addons-925000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | addons-925000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-925000 --wait=true                                             | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-925000                                                         | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| start   | -p nospam-796000 -n=1 --memory=2250 --wait=false                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-796000                                                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
	| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | --context functional-908000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 11:50:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 11:50:13.933830   16220 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:50:13.933939   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:50:13.933941   16220 out.go:304] Setting ErrFile to fd 2...
	I0328 11:50:13.933943   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:50:13.934062   16220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:50:13.935099   16220 out.go:298] Setting JSON to false
	I0328 11:50:13.950851   16220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10185,"bootTime":1711641628,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:50:13.950910   16220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:50:13.956280   16220 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:50:13.964297   16220 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:50:13.968182   16220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:50:13.964334   16220 notify.go:220] Checking for updates...
	I0328 11:50:13.976047   16220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:50:13.979224   16220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:50:13.982271   16220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:50:13.985285   16220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:50:13.988441   16220 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:50:13.988491   16220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:50:13.993218   16220 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:50:14.000209   16220 start.go:297] selected driver: qemu2
	I0328 11:50:14.000213   16220 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:50:14.000286   16220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:50:14.002562   16220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:50:14.002600   16220 cni.go:84] Creating CNI manager for ""
	I0328 11:50:14.002607   16220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 11:50:14.002641   16220 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:50:14.007052   16220 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:50:14.016290   16220 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0328 11:50:14.019317   16220 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:50:14.019328   16220 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:50:14.019338   16220 cache.go:56] Caching tarball of preloaded images
	I0328 11:50:14.019384   16220 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:50:14.019387   16220 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:50:14.019448   16220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/functional-908000/config.json ...
	I0328 11:50:14.019851   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:50:14.019882   16220 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "functional-908000"
	I0328 11:50:14.019889   16220 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:50:14.019894   16220 fix.go:54] fixHost starting: 
	I0328 11:50:14.020007   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0328 11:50:14.020015   16220 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:50:14.028228   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0328 11:50:14.031322   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
	I0328 11:50:14.033436   16220 main.go:141] libmachine: STDOUT: 
	I0328 11:50:14.033455   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:50:14.033485   16220 fix.go:56] duration metric: took 13.590708ms for fixHost
	I0328 11:50:14.033488   16220 start.go:83] releasing machines lock for "functional-908000", held for 13.604042ms
	W0328 11:50:14.033493   16220 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:50:14.033523   16220 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:50:14.033528   16220 start.go:728] Will try again in 5 seconds ...
	I0328 11:50:19.035628   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:50:19.035817   16220 start.go:364] duration metric: took 148.209µs to acquireMachinesLock for "functional-908000"
	I0328 11:50:19.035930   16220 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:50:19.035938   16220 fix.go:54] fixHost starting: 
	I0328 11:50:19.036409   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0328 11:50:19.036426   16220 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:50:19.040911   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0328 11:50:19.044081   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
	I0328 11:50:19.052333   16220 main.go:141] libmachine: STDOUT: 
	I0328 11:50:19.052366   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:50:19.052416   16220 fix.go:56] duration metric: took 16.47825ms for fixHost
	I0328 11:50:19.052429   16220 start.go:83] releasing machines lock for "functional-908000", held for 16.596375ms
	W0328 11:50:19.052602   16220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:50:19.059872   16220 out.go:177] 
	W0328 11:50:19.062932   16220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:50:19.062975   16220 out.go:239] * 
	W0328 11:50:19.066244   16220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:50:19.073899   16220 out.go:177] 
	
	
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-908000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |                     |
|         | -p download-only-603000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
|         | -p download-only-625000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| start   | -o=json --download-only                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
|         | -p download-only-549000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | binary-mirror-643000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52935                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-643000                                                  | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| addons  | enable dashboard -p                                                      | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | addons-925000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | addons-925000                                                            |                      |         |                |                     |                     |
| start   | -p addons-925000 --wait=true                                             | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-925000                                                         | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | -p nospam-796000 -n=1 --memory=2250 --wait=false                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-796000                                                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | --context functional-908000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/28 11:50:13
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0328 11:50:13.933830   16220 out.go:291] Setting OutFile to fd 1 ...
I0328 11:50:13.933939   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:13.933941   16220 out.go:304] Setting ErrFile to fd 2...
I0328 11:50:13.933943   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:13.934062   16220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:50:13.935099   16220 out.go:298] Setting JSON to false
I0328 11:50:13.950851   16220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10185,"bootTime":1711641628,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0328 11:50:13.950910   16220 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0328 11:50:13.956280   16220 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0328 11:50:13.964297   16220 out.go:177]   - MINIKUBE_LOCATION=17877
I0328 11:50:13.968182   16220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
I0328 11:50:13.964334   16220 notify.go:220] Checking for updates...
I0328 11:50:13.976047   16220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0328 11:50:13.979224   16220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0328 11:50:13.982271   16220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
I0328 11:50:13.985285   16220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0328 11:50:13.988441   16220 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:50:13.988491   16220 driver.go:392] Setting default libvirt URI to qemu:///system
I0328 11:50:13.993218   16220 out.go:177] * Using the qemu2 driver based on existing profile
I0328 11:50:14.000209   16220 start.go:297] selected driver: qemu2
I0328 11:50:14.000213   16220 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0328 11:50:14.000286   16220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0328 11:50:14.002562   16220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0328 11:50:14.002600   16220 cni.go:84] Creating CNI manager for ""
I0328 11:50:14.002607   16220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0328 11:50:14.002641   16220 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0328 11:50:14.007052   16220 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:50:14.016290   16220 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0328 11:50:14.019317   16220 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0328 11:50:14.019328   16220 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0328 11:50:14.019338   16220 cache.go:56] Caching tarball of preloaded images
I0328 11:50:14.019384   16220 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0328 11:50:14.019387   16220 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0328 11:50:14.019448   16220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/functional-908000/config.json ...
I0328 11:50:14.019851   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0328 11:50:14.019882   16220 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "functional-908000"
I0328 11:50:14.019889   16220 start.go:96] Skipping create...Using existing machine configuration
I0328 11:50:14.019894   16220 fix.go:54] fixHost starting: 
I0328 11:50:14.020007   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0328 11:50:14.020015   16220 fix.go:138] unexpected machine state, will restart: <nil>
I0328 11:50:14.028228   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0328 11:50:14.031322   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
I0328 11:50:14.033436   16220 main.go:141] libmachine: STDOUT: 
I0328 11:50:14.033455   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0328 11:50:14.033485   16220 fix.go:56] duration metric: took 13.590708ms for fixHost
I0328 11:50:14.033488   16220 start.go:83] releasing machines lock for "functional-908000", held for 13.604042ms
W0328 11:50:14.033493   16220 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0328 11:50:14.033523   16220 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0328 11:50:14.033528   16220 start.go:728] Will try again in 5 seconds ...
I0328 11:50:19.035628   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0328 11:50:19.035817   16220 start.go:364] duration metric: took 148.209µs to acquireMachinesLock for "functional-908000"
I0328 11:50:19.035930   16220 start.go:96] Skipping create...Using existing machine configuration
I0328 11:50:19.035938   16220 fix.go:54] fixHost starting: 
I0328 11:50:19.036409   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0328 11:50:19.036426   16220 fix.go:138] unexpected machine state, will restart: <nil>
I0328 11:50:19.040911   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0328 11:50:19.044081   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
I0328 11:50:19.052333   16220 main.go:141] libmachine: STDOUT: 
I0328 11:50:19.052366   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0328 11:50:19.052416   16220 fix.go:56] duration metric: took 16.47825ms for fixHost
I0328 11:50:19.052429   16220 start.go:83] releasing machines lock for "functional-908000", held for 16.596375ms
W0328 11:50:19.052602   16220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0328 11:50:19.059872   16220 out.go:177] 
W0328 11:50:19.062932   16220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0328 11:50:19.062975   16220 out.go:239] * 
W0328 11:50:19.066244   16220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0328 11:50:19.073899   16220 out.go:177]

* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
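Editor's note: the root cause visible in the captured log above is that libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so every "Restarting existing qemu2 VM" attempt fails with "Connection refused". The following is a minimal Go sketch (not part of the minikube test suite; added purely as an illustration) that reproduces the same probe against the daemon's unix socket:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Try the unix socket that socket_vmnet_client needs. If the
    	// socket_vmnet daemon is not running, this fails exactly the way
    	// the log above records: "Connection refused".
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }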

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd28995195/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
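Editor's note: the assertion at functional_test.go:1224 effectively checks that the captured `minikube logs` output mentions the word "Linux" (which would appear for a running node); because the host never started, the logs contain none. A hypothetical sketch of that containment check (file name and structure are illustrative, not the suite's actual helper):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Read the log file written by `minikube logs --file=...`
    	// (path is illustrative).
    	data, err := os.ReadFile("logs.txt")
    	if err != nil {
    		fmt.Println("read error:", err)
    		return
    	}
    	if !strings.Contains(string(data), "Linux") {
    		fmt.Println(`expected minikube logs to include word: "Linux"`)
    	}
    }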
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |                     |
|         | -p download-only-603000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
|         | -p download-only-625000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
| start   | -o=json --download-only                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
|         | -p download-only-549000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-603000                                                  | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | binary-mirror-643000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52935                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-643000                                                  | binary-mirror-643000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| addons  | enable dashboard -p                                                      | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | addons-925000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | addons-925000                                                            |                      |         |                |                     |                     |
| start   | -p addons-925000 --wait=true                                             | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-925000                                                         | addons-925000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | -p nospam-796000 -n=1 --memory=2250 --wait=false                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-796000 --log_dir                                                  | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-796000                                                         | nospam-796000        | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT | 28 Mar 24 11:49 PDT |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:49 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT | 28 Mar 24 11:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | --context functional-908000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/28 11:50:13
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0328 11:50:13.933830   16220 out.go:291] Setting OutFile to fd 1 ...
I0328 11:50:13.933939   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:13.933941   16220 out.go:304] Setting ErrFile to fd 2...
I0328 11:50:13.933943   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:13.934062   16220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:50:13.935099   16220 out.go:298] Setting JSON to false
I0328 11:50:13.950851   16220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10185,"bootTime":1711641628,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0328 11:50:13.950910   16220 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0328 11:50:13.956280   16220 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0328 11:50:13.964297   16220 out.go:177]   - MINIKUBE_LOCATION=17877
I0328 11:50:13.968182   16220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
I0328 11:50:13.964334   16220 notify.go:220] Checking for updates...
I0328 11:50:13.976047   16220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0328 11:50:13.979224   16220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0328 11:50:13.982271   16220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
I0328 11:50:13.985285   16220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0328 11:50:13.988441   16220 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:50:13.988491   16220 driver.go:392] Setting default libvirt URI to qemu:///system
I0328 11:50:13.993218   16220 out.go:177] * Using the qemu2 driver based on existing profile
I0328 11:50:14.000209   16220 start.go:297] selected driver: qemu2
I0328 11:50:14.000213   16220 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0328 11:50:14.000286   16220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0328 11:50:14.002562   16220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0328 11:50:14.002600   16220 cni.go:84] Creating CNI manager for ""
I0328 11:50:14.002607   16220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0328 11:50:14.002641   16220 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0328 11:50:14.007052   16220 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:50:14.016290   16220 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0328 11:50:14.019317   16220 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0328 11:50:14.019328   16220 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0328 11:50:14.019338   16220 cache.go:56] Caching tarball of preloaded images
I0328 11:50:14.019384   16220 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0328 11:50:14.019387   16220 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0328 11:50:14.019448   16220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/functional-908000/config.json ...
I0328 11:50:14.019851   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0328 11:50:14.019882   16220 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "functional-908000"
I0328 11:50:14.019889   16220 start.go:96] Skipping create...Using existing machine configuration
I0328 11:50:14.019894   16220 fix.go:54] fixHost starting: 
I0328 11:50:14.020007   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0328 11:50:14.020015   16220 fix.go:138] unexpected machine state, will restart: <nil>
I0328 11:50:14.028228   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0328 11:50:14.031322   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
I0328 11:50:14.033436   16220 main.go:141] libmachine: STDOUT: 
I0328 11:50:14.033455   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0328 11:50:14.033485   16220 fix.go:56] duration metric: took 13.590708ms for fixHost
I0328 11:50:14.033488   16220 start.go:83] releasing machines lock for "functional-908000", held for 13.604042ms
W0328 11:50:14.033493   16220 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0328 11:50:14.033523   16220 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0328 11:50:14.033528   16220 start.go:728] Will try again in 5 seconds ...
I0328 11:50:19.035628   16220 start.go:360] acquireMachinesLock for functional-908000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0328 11:50:19.035817   16220 start.go:364] duration metric: took 148.209µs to acquireMachinesLock for "functional-908000"
I0328 11:50:19.035930   16220 start.go:96] Skipping create...Using existing machine configuration
I0328 11:50:19.035938   16220 fix.go:54] fixHost starting: 
I0328 11:50:19.036409   16220 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0328 11:50:19.036426   16220 fix.go:138] unexpected machine state, will restart: <nil>
I0328 11:50:19.040911   16220 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0328 11:50:19.044081   16220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f2:99:62:a9:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/functional-908000/disk.qcow2
I0328 11:50:19.052333   16220 main.go:141] libmachine: STDOUT: 
I0328 11:50:19.052366   16220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0328 11:50:19.052416   16220 fix.go:56] duration metric: took 16.47825ms for fixHost
I0328 11:50:19.052429   16220 start.go:83] releasing machines lock for "functional-908000", held for 16.596375ms
W0328 11:50:19.052602   16220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0328 11:50:19.059872   16220 out.go:177] 
W0328 11:50:19.062932   16220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0328 11:50:19.062975   16220 out.go:239] * 
W0328 11:50:19.066244   16220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0328 11:50:19.073899   16220 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
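Note: every start attempt above fails at the same step: libmachine cannot connect to the socket_vmnet control socket, so the qemu2 VM never boots and every later Functional test inherits a stopped cluster. A minimal Go sketch of that connectivity check, not part of the test suite (the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
	)

	// Probe the socket_vmnet control socket the same way libmachine does
	// before launching qemu-system-aarch64. With no daemon listening this
	// fails with "connection refused" (socket file present, no listener)
	// or "no such file or directory" (socket file missing).
	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}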

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.268833ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stderr:
I0328 11:51:16.529555   16566 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:16.529928   16566 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.529932   16566 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:16.529935   16566 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.530097   16566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:16.530300   16566 mustload.go:65] Loading cluster: functional-908000
I0328 11:51:16.530475   16566 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:16.534034   16566 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0328 11:51:16.538063   16566 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (43.647875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status: exit status 7 (31.961416ms)

-- stdout --
	functional-908000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-908000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.796ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -o json: exit status 7 (31.828ms)

-- stdout --
	{"Name":"functional-908000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-908000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.93125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
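Note: the exit status 7 returned by each status invocation above is bit-encoded, as documented in "minikube status --help": bit 0 set when the minikube VM is not OK, bit 1 when the cluster is not OK, bit 2 when Kubernetes is not OK, so 7 means all three are down, consistent with the Stopped fields in the stdout. A hypothetical decoder for that documented encoding:

	package main

	import "fmt"

	func main() {
		code := 7 // exit status observed in the runs above
		// Bit meanings per `minikube status --help`, right to left.
		names := []string{"minikube VM not OK", "cluster not OK", "kubernetes not OK"}
		for i, name := range names {
			if code&(1<<i) != 0 {
				fmt.Println(name)
			}
		}
	}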

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.698291ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-908000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-908000 describe po hello-node-connect: exit status 1 (26.381167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1600: "kubectl --context functional-908000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-908000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-908000 logs -l app=hello-node-connect: exit status 1 (26.25125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1606: "kubectl --context functional-908000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-908000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-908000 describe svc hello-node-connect: exit status 1 (26.832833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1612: "kubectl --context functional-908000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.145875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-908000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.761333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello": exit status 83 (49.809584ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname": exit status 83 (50.902ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-908000"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.246625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (58.247667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.003ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd520847766/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd520847766/001/cp-test.txt: exit status 83 (43.398542ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd520847766/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.054167ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd520847766/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (49.720333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.90425ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
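Note: the "(-want +got)" blocks above have the textual shape of github.com/google/go-cmp diffs: every cp/ssh call returned the "host is not running" hint instead of the copied file's contents, so the string comparison fails wholesale. A minimal reproduction of that comparison (string literals taken from the diff above):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := "Test file for checking file cp process"
		got := "* The control-plane node functional-908000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-908000\"\n"
		// cmp.Diff returns "" for equal values, otherwise a (-want +got)
		// report like the ones embedded in the helpers_test.go output.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("content mismatch (-want +got):\n%s", diff)
		}
	}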

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15784/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/15784/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/15784/hosts": exit status 83 (42.799542ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/15784/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.939ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15784.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/15784.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/15784.pem": exit status 83 (43.016875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/15784.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/15784.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/15784.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15784.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/15784.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/15784.pem": exit status 83 (45.696916ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/15784.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/15784.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/15784.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (46.652ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
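Note: the /etc/ssl/certs/51391683.0 path checked above is the OpenSSL hash-link form of the same test certificate: eight hex digits of the subject-name hash plus a ".0" collision counter (presumably recoverable with "openssl x509 -noout -subject_hash -in minikube_test.pem"), so CertSync expects the synced PEM under both its literal name and its hash link; here both reads fail for the same stopped-host reason.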
functional_test.go:1995: Checking for existence of /etc/ssl/certs/157842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/157842.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/157842.pem": exit status 83 (42.783042ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/157842.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/157842.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/157842.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/157842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/157842.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/157842.pem": exit status 83 (42.780209ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/157842.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/157842.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/157842.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (44.701541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.847917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
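
The target filename follows OpenSSL's subject-hash convention: cert-store entries are named <subject_hash>.0, which is how minikube_test2.pem maps to /etc/ssl/certs/3ec20f2e.0 above. A sketch of reproducing the hash on the host (assuming the PEM is in the working directory; on a running node the same file would then be readable via the ssh command the test uses):

	# Prints the subject hash OpenSSL uses for the store filename;
	# per the test's mapping above this should print 3ec20f2e.
	openssl x509 -noout -hash -in minikube_test2.pem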

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.153167ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-908000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.435083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
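
The assertion itself is a plain go-template walk over the first node's label map; the same query against any reachable context looks like this (sketch, command taken from the test above):

	kubectl --context functional-908000 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# On a healthy minikube node the printed keys would include
	# minikube.k8s.io/commit, /version, /updated_at, /name and /primary.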

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio": exit status 83 (40.381833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
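
The check leans on systemctl semantics: `systemctl is-active` prints the unit's state and exits non-zero for anything other than "active", so with the docker runtime selected the test wants crio reported as inactive. A sketch of the passing case on a running node:

	minikube -p functional-908000 ssh "sudo systemctl is-active crio"
	# expected stdout on a docker-runtime node: inactive
	# (systemctl exits non-zero, typically 3, when the unit is inactive)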

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 version -o=json --components: exit status 83 (42.938541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
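
For reference, the strings scanned for above are substrings the test expects in the JSON that `minikube version -o=json --components` emits on a running cluster:

	out/minikube-darwin-arm64 -p functional-908000 version -o=json --components
	# A passing run's JSON would contain each of the keys listed above
	# (buildctl, commit, containerd, crictl, crio, crun, ctr, docker,
	#  minikubeVersion, podman); the exact layout is not captured here.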

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:
I0328 11:51:16.944240   16581 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:16.944400   16581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.944404   16581 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:16.944406   16581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.944527   16581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:16.944961   16581 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:16.945022   16581 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
I0328 11:51:17.176297   16595 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:17.176442   16595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.176445   16595 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:17.176447   16595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.176580   16595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:17.176987   16595 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:17.177052   16595 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
I0328 11:51:17.138005   16593 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:17.138193   16593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.138196   16593 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:17.138198   16593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.138335   16593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:17.138775   16593 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:17.138841   16593 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
I0328 11:51:16.980553   16583 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:16.980695   16583 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.980698   16583 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:16.980701   16583 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:16.980831   16583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:16.981259   16583 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:16.981325   16583 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd: exit status 83 (42.850292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr:
I0328 11:51:17.061119   16589 out.go:291] Setting OutFile to fd 1 ...
I0328 11:51:17.061477   16589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.061481   16589 out.go:304] Setting ErrFile to fd 2...
I0328 11:51:17.061483   16589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:51:17.061599   16589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:51:17.062010   16589 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:17.062381   16589 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:51:17.062602   16589 build_images.go:133] succeeded building to: 
I0328 11:51:17.062606   16589 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "localhost/my-image:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000": exit status 1 (46.951ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
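
The pattern under test is the usual docker-env round trip: evaluate the exports, then confirm the CLI still sees the cluster. A sketch of that pipeline (on a running cluster, docker-env emits export lines for DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH and MINIKUBE_ACTIVE_DOCKERD; here the host is stopped, so the eval'd command fails and the status call exits 1):

	eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && \
	  out/minikube-darwin-arm64 status -p functional-908000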

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (43.837292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:51:16.811059   16575 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:51:16.811605   16575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.811609   16575 out.go:304] Setting ErrFile to fd 2...
	I0328 11:51:16.811611   16575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.811740   16575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:51:16.811947   16575 mustload.go:65] Loading cluster: functional-908000
	I0328 11:51:16.812128   16575 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:51:16.816679   16575 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0328 11:51:16.819732   16575 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (44.604541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:51:16.899234   16579 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:51:16.899373   16579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.899376   16579 out.go:304] Setting ErrFile to fd 2...
	I0328 11:51:16.899378   16579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.899490   16579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:51:16.899724   16579 mustload.go:65] Loading cluster: functional-908000
	I0328 11:51:16.899901   16579 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:51:16.904672   16579 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0328 11:51:16.908677   16579 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (43.483083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:51:16.854834   16577 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:51:16.854971   16577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.854974   16577 out.go:304] Setting ErrFile to fd 2...
	I0328 11:51:16.854976   16577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.855104   16577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:51:16.855326   16577 mustload.go:65] Loading cluster: functional-908000
	I0328 11:51:16.855513   16577 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:51:16.859733   16577 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0328 11:51:16.863711   16577 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.908834ms)

                                                
                                                
** stderr ** 
	error: context "functional-908000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list: exit status 83 (46.707709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-908000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list -o json: exit status 83 (44.721833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-908000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node: exit status 83 (42.965459ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}: exit status 83 (42.748083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url: exit status 83 (41.899208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1565: failed to parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0328 11:50:22.050016   16344 out.go:291] Setting OutFile to fd 1 ...
I0328 11:50:22.050187   16344 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:22.050192   16344 out.go:304] Setting ErrFile to fd 2...
I0328 11:50:22.050195   16344 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:50:22.050328   16344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:50:22.050558   16344 mustload.go:65] Loading cluster: functional-908000
I0328 11:50:22.050769   16344 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:50:22.055566   16344 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0328 11:50:22.067495   16344 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

                                                
                                                
stdout: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 16345: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-908000": client config: context "functional-908000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-908000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-908000 get svc nginx-svc: exit status 1 (69.455584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-908000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr: (1.422374042s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr: (1.366962583s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.20861075s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-908000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 image load --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr: (1.220881584s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save gcr.io/google-containers/addon-resizer:functional-908000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.021656292s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
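
The scutil dump above explains the test's expectation: resolver #8 scopes cluster.local to 10.96.0.10, the same mechanism as a macOS per-domain resolver file (see resolver(5)). A sketch of the equivalent static entry, contents illustrative:

	# /etc/resolver/cluster.local
	nameserver 10.96.0.10
	# With no tunnel running, 10.96.0.10 is unreachable and dig times
	# out exactly as logged above.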

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.00s)
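
The error string above ("Client.Timeout exceeded while awaiting headers") is the standard net/http client timeout, so this is the same broken DNS forwarding as the previous test, observed one layer up in an HTTP GET. A hedged repro sketch (the 10s timeout here is an assumption, not the suite's value):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("request failed:", err) // context deadline exceeded, as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // a working tunnel serves the "Welcome to nginx!" page
}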

TestMultiControlPlane/serial/StartCluster (10.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-446000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-446000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.062225042s)

-- stdout --
	* [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 11:53:11.015619   16663 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:53:11.015728   16663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:53:11.015732   16663 out.go:304] Setting ErrFile to fd 2...
	I0328 11:53:11.015735   16663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:53:11.015859   16663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:53:11.016908   16663 out.go:298] Setting JSON to false
	I0328 11:53:11.032875   16663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10363,"bootTime":1711641628,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:53:11.032941   16663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:53:11.041754   16663 out.go:177] * [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:53:11.050399   16663 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:53:11.055348   16663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:53:11.050441   16663 notify.go:220] Checking for updates...
	I0328 11:53:11.061394   16663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:53:11.065321   16663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:53:11.068366   16663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:53:11.071399   16663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:53:11.074564   16663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:53:11.078444   16663 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 11:53:11.085349   16663 start.go:297] selected driver: qemu2
	I0328 11:53:11.085356   16663 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:53:11.085363   16663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:53:11.087699   16663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:53:11.091367   16663 out.go:177] * Automatically selected the socket_vmnet network
	I0328 11:53:11.095436   16663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:53:11.095478   16663 cni.go:84] Creating CNI manager for ""
	I0328 11:53:11.095484   16663 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0328 11:53:11.095489   16663 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 11:53:11.095520   16663 start.go:340] cluster config:
	{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:53:11.100350   16663 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:53:11.108357   16663 out.go:177] * Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	I0328 11:53:11.112307   16663 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:53:11.112324   16663 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:53:11.112336   16663 cache.go:56] Caching tarball of preloaded images
	I0328 11:53:11.112402   16663 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:53:11.112409   16663 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:53:11.112639   16663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/ha-446000/config.json ...
	I0328 11:53:11.112650   16663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/ha-446000/config.json: {Name:mkd9bf4f2343d97d73fce59b63f1da573a96bda6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:53:11.112878   16663 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:53:11.112911   16663 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "ha-446000"
	I0328 11:53:11.112925   16663 start.go:93] Provisioning new machine with config: &{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:53:11.112966   16663 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:53:11.117365   16663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 11:53:11.135004   16663 start.go:159] libmachine.API.Create for "ha-446000" (driver="qemu2")
	I0328 11:53:11.135032   16663 client.go:168] LocalClient.Create starting
	I0328 11:53:11.135095   16663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:53:11.135123   16663 main.go:141] libmachine: Decoding PEM data...
	I0328 11:53:11.135134   16663 main.go:141] libmachine: Parsing certificate...
	I0328 11:53:11.135178   16663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:53:11.135200   16663 main.go:141] libmachine: Decoding PEM data...
	I0328 11:53:11.135209   16663 main.go:141] libmachine: Parsing certificate...
	I0328 11:53:11.135583   16663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:53:11.279575   16663 main.go:141] libmachine: Creating SSH key...
	I0328 11:53:11.517833   16663 main.go:141] libmachine: Creating Disk image...
	I0328 11:53:11.517844   16663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:53:11.518080   16663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:11.530846   16663 main.go:141] libmachine: STDOUT: 
	I0328 11:53:11.530870   16663 main.go:141] libmachine: STDERR: 
	I0328 11:53:11.530940   16663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2 +20000M
	I0328 11:53:11.541804   16663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:53:11.541824   16663 main.go:141] libmachine: STDERR: 
	I0328 11:53:11.541840   16663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:11.541843   16663 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:53:11.541873   16663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:f6:30:00:10:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:11.543665   16663 main.go:141] libmachine: STDOUT: 
	I0328 11:53:11.543682   16663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:53:11.543702   16663 client.go:171] duration metric: took 408.660167ms to LocalClient.Create
	I0328 11:53:13.545936   16663 start.go:128] duration metric: took 2.432918584s to createHost
	I0328 11:53:13.546079   16663 start.go:83] releasing machines lock for "ha-446000", held for 2.433126125s
	W0328 11:53:13.546142   16663 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:53:13.563326   16663 out.go:177] * Deleting "ha-446000" in qemu2 ...
	W0328 11:53:13.593622   16663 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:53:13.593643   16663 start.go:728] Will try again in 5 seconds ...
	I0328 11:53:18.595927   16663 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:53:18.596340   16663 start.go:364] duration metric: took 289.791µs to acquireMachinesLock for "ha-446000"
	I0328 11:53:18.596462   16663 start.go:93] Provisioning new machine with config: &{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:53:18.596734   16663 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:53:18.607285   16663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 11:53:18.655152   16663 start.go:159] libmachine.API.Create for "ha-446000" (driver="qemu2")
	I0328 11:53:18.655198   16663 client.go:168] LocalClient.Create starting
	I0328 11:53:18.655299   16663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:53:18.655360   16663 main.go:141] libmachine: Decoding PEM data...
	I0328 11:53:18.655375   16663 main.go:141] libmachine: Parsing certificate...
	I0328 11:53:18.655430   16663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:53:18.655471   16663 main.go:141] libmachine: Decoding PEM data...
	I0328 11:53:18.655486   16663 main.go:141] libmachine: Parsing certificate...
	I0328 11:53:18.656159   16663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:53:18.812987   16663 main.go:141] libmachine: Creating SSH key...
	I0328 11:53:18.969278   16663 main.go:141] libmachine: Creating Disk image...
	I0328 11:53:18.969285   16663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:53:18.969493   16663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:18.982223   16663 main.go:141] libmachine: STDOUT: 
	I0328 11:53:18.982242   16663 main.go:141] libmachine: STDERR: 
	I0328 11:53:18.982292   16663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2 +20000M
	I0328 11:53:18.992908   16663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:53:18.992924   16663 main.go:141] libmachine: STDERR: 
	I0328 11:53:18.992941   16663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:18.992944   16663 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:53:18.992982   16663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:74:5f:15:22:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:53:18.994638   16663 main.go:141] libmachine: STDOUT: 
	I0328 11:53:18.994653   16663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:53:18.994666   16663 client.go:171] duration metric: took 339.459209ms to LocalClient.Create
	I0328 11:53:20.996869   16663 start.go:128] duration metric: took 2.400072291s to createHost
	I0328 11:53:20.996967   16663 start.go:83] releasing machines lock for "ha-446000", held for 2.400550125s
	W0328 11:53:20.997319   16663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:53:21.007980   16663 out.go:177] 
	W0328 11:53:21.017137   16663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:53:21.020939   16663 out.go:239] * 
	* 
	W0328 11:53:21.023642   16663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:53:21.032929   16663 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-446000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (69.215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.13s)
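
Root cause for this block: both VM creation attempts die at the same step, when libmachine execs socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", meaning no socket_vmnet daemon is serving that socket on the agent. A quick probe of that precondition (illustrative sketch; the socket path is the one from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet_client can only hand qemu a network fd if a daemon
	// is actually listening on this unix socket.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // matches the failure mode in this run
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}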

TestMultiControlPlane/serial/DeployApp (114.21s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.825375ms)

** stderr ** 
	error: cluster "ha-446000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- rollout status deployment/busybox: exit status 1 (58.649958ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.77375ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.142083ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.453042ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.5105ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.079541ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.765458ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.940708ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.056708ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.373625ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.438542ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.929958ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.091292ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.0485ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.002708ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.742542ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.094417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.21s)
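
Everything in this block is fallout from StartCluster: kubectl has no "ha-446000" cluster entry, so each of the eleven retries burns time on "no server found". A fail-fast guard could check the kubeconfig before looping; a sketch using client-go (an assumed helper for illustration, not something the suite actually does):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("cannot read kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["ha-446000"]; !ok {
		// The cluster never came up, so retrying kubectl is pointless.
		fmt.Println(`context "ha-446000" missing; skip the retry loop`)
		return
	}
	fmt.Println("context present, safe to run kubectl steps")
}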

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-446000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.988708ms)

** stderr ** 
	error: no server found for cluster "ha-446000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (31.753833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-446000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-446000 -v=7 --alsologtostderr: exit status 83 (47.114375ms)

-- stdout --
	* The control-plane node ha-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-446000"

-- /stdout --
** stderr ** 
	I0328 11:55:15.454731   16790 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:15.455249   16790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.455254   16790 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:15.455256   16790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.455414   16790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:15.455643   16790 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:15.455831   16790 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:15.460770   16790 out.go:177] * The control-plane node ha-446000 host is not running: state=Stopped
	I0328 11:55:15.465690   16790 out.go:177]   To start a cluster, run: "minikube start -p ha-446000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-446000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.234708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-446000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-446000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.554125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-446000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-446000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-446000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (31.961ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
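
The second error above ("unexpected end of JSON input") is a direct consequence of the first: kubectl printed nothing, and decoding an empty byte slice with encoding/json always yields exactly that error. Minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}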

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-446000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-446000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.082709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
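
The assertion that failed here parses `profile list --output json` and wants 4 nodes plus status "HAppy"; the profile on disk still describes the single stopped control plane left over from the failed start. A trimmed decoder for just the fields the check needs (field names taken from the JSON above; the struct shape is illustrative, not minikube's):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the payload captured in the log above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-446000","Status":"Stopped","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		fmt.Println("decode:", err)
		return
	}
	p := pl.Valid[0]
	fmt.Printf("profile %s: status=%s nodes=%d (want 4 nodes and status \"HAppy\")\n",
		p.Name, p.Status, len(p.Config.Nodes))
}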

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status --output json -v=7 --alsologtostderr: exit status 7 (31.767667ms)

-- stdout --
	{"Name":"ha-446000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0328 11:55:15.697283   16803 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:15.697435   16803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.697438   16803 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:15.697444   16803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.697572   16803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:15.697694   16803 out.go:298] Setting JSON to true
	I0328 11:55:15.697705   16803 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:15.697764   16803 notify.go:220] Checking for updates...
	I0328 11:55:15.697910   16803 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:15.697916   16803 status.go:255] checking status of ha-446000 ...
	I0328 11:55:15.698114   16803 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:15.698118   16803 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:15.698120   16803 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-446000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.171042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
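
Here the immediate error is a shape mismatch: with only one node, `status --output json` emitted a single object, while the test decodes into a []cmd.Status slice, hence "cannot unmarshal object into Go value of type []cmd.Status". A tolerant decoder that accepts both shapes (sketch only; the Status struct below is a cut-down stand-in for cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses first tries the array form, then falls back to a
// single object wrapped in a one-element slice.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// The exact object printed in the log above.
	raw := []byte(`{"Name":"ha-446000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	got, err := decodeStatuses(raw)
	fmt.Println(got, err)
}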

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.896ms)

-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:15.762582   16807 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:15.762988   16807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.762994   16807 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:15.762996   16807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.763168   16807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:15.763415   16807 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:15.763614   16807 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:15.767977   16807 out.go:177] 
	W0328 11:55:15.771871   16807 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0328 11:55:15.771875   16807 out.go:239] * 
	* 
	W0328 11:55:15.774601   16807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:55:15.778863   16807 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-446000 node stop m02 -v=7 --alsologtostderr": exit status 85
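Exit status 85 (`GUEST_NODE_RETRIEVE`) follows from the earlier StartCluster failure: the profile never gained an m02 node, so there is nothing to stop. A hedged sketch (not part of the test suite) that guards the stop with `node list`, reusing the binary path and profile name from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// node list is the same subcommand the suite itself runs later
		// (see RestartClusterKeepsNodes below).
		out, err := exec.Command("out/minikube-darwin-arm64",
			"node", "list", "-p", "ha-446000").CombinedOutput()
		if err != nil {
			fmt.Println("node list failed:", err)
			return
		}
		if !strings.Contains(string(out), "m02") {
			fmt.Println("node m02 not present; skipping node stop")
			return
		}
		// Only reached when m02 exists in the profile.
		fmt.Println("m02 present, safe to run: node stop m02")
	}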
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (31.986709ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:15.814142   16809 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:15.814302   16809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.814306   16809 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:15.814308   16809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.814438   16809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:15.814555   16809 out.go:298] Setting JSON to false
	I0328 11:55:15.814567   16809 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:15.814625   16809 notify.go:220] Checking for updates...
	I0328 11:55:15.814772   16809 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:15.814779   16809 status.go:255] checking status of ha-446000 ...
	I0328 11:55:15.815006   16809 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:15.815010   16809 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:15.815012   16809 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
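The four assertions above (ha_test.go:375-384) read like substring counts over the plain-text status output: three "type: Control Plane" entries, three running hosts and kubelets, two running apiservers. A speculative sketch of that counting logic, fed the stopped single-node output shown above (every count comes back 0 here):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		stdout := "ha-446000\ntype: Control Plane\nhost: Stopped\n" +
			"kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(stdout, "type: Control Plane")) // message above implies want 3
		fmt.Println("hosts running: ", strings.Count(stdout, "host: Running"))       // want 3
		fmt.Println("kubelets up:   ", strings.Count(stdout, "kubelet: Running"))    // want 3
		fmt.Println("apiservers up: ", strings.Count(stdout, "apiserver: Running"))  // want 2
	}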

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.200417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-446000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
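The expectation at ha_test.go:413 only concerns the "Status" field buried in that JSON blob. A minimal sketch that decodes just the needed fields from `profile list --output json` output; the struct below is trimmed to the two fields used and is not minikube's full profile type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList keeps only what the check needs from the JSON above.
	type profileList struct {
		Valid []struct {
			Name   string
			Status string
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the actual output logged above.
		out := []byte(`{"invalid":[],"valid":[{"Name":"ha-446000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-446000" && p.Status != "Degraded" {
				fmt.Printf("profile %s: want status %q, got %q\n", p.Name, "Degraded", p.Status)
			}
		}
	}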
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.636208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.714708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:15.986098   16819 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:15.986465   16819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.986475   16819 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:15.986478   16819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:15.986642   16819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:15.986869   16819 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:15.987058   16819 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:15.990748   16819 out.go:177] 
	W0328 11:55:15.994694   16819 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0328 11:55:15.994698   16819 out.go:239] * 
	* 
	W0328 11:55:15.996739   16819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:55:16.000702   16819 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0328 11:55:15.986098   16819 out.go:291] Setting OutFile to fd 1 ...
I0328 11:55:15.986465   16819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:55:15.986475   16819 out.go:304] Setting ErrFile to fd 2...
I0328 11:55:15.986478   16819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:55:15.986642   16819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:55:15.986869   16819 mustload.go:65] Loading cluster: ha-446000
I0328 11:55:15.987058   16819 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:55:15.990748   16819 out.go:177] 
W0328 11:55:15.994694   16819 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0328 11:55:15.994698   16819 out.go:239] * 
* 
W0328 11:55:15.996739   16819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0328 11:55:16.000702   16819 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-446000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (32.265125ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:16.036354   16821 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:16.036495   16821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:16.036498   16821 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:16.036500   16821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:16.036636   16821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:16.036751   16821 out.go:298] Setting JSON to false
	I0328 11:55:16.036762   16821 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:16.036820   16821 notify.go:220] Checking for updates...
	I0328 11:55:16.036963   16821 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:16.036969   16821 status.go:255] checking status of ha-446000 ...
	I0328 11:55:16.037175   16821 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:16.037179   16821 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:16.037181   16821 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (76.664083ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:17.565740   16825 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:17.565939   16825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:17.565944   16825 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:17.565947   16825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:17.566111   16825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:17.566266   16825 out.go:298] Setting JSON to false
	I0328 11:55:17.566281   16825 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:17.566319   16825 notify.go:220] Checking for updates...
	I0328 11:55:17.566530   16825 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:17.566538   16825 status.go:255] checking status of ha-446000 ...
	I0328 11:55:17.566810   16825 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:17.566815   16825 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:17.566818   16825 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (77.982042ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:19.607405   16829 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:19.607625   16829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:19.607630   16829 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:19.607633   16829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:19.607789   16829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:19.607944   16829 out.go:298] Setting JSON to false
	I0328 11:55:19.607961   16829 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:19.607996   16829 notify.go:220] Checking for updates...
	I0328 11:55:19.608209   16829 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:19.608217   16829 status.go:255] checking status of ha-446000 ...
	I0328 11:55:19.608477   16829 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:19.608482   16829 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:19.608485   16829 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (74.350375ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:21.421112   16832 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:21.421283   16832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:21.421288   16832 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:21.421291   16832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:21.421457   16832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:21.421612   16832 out.go:298] Setting JSON to false
	I0328 11:55:21.421627   16832 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:21.421661   16832 notify.go:220] Checking for updates...
	I0328 11:55:21.421887   16832 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:21.421895   16832 status.go:255] checking status of ha-446000 ...
	I0328 11:55:21.422165   16832 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:21.422170   16832 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:21.422173   16832 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (75.744542ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:25.364062   16834 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:25.364207   16834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:25.364211   16834 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:25.364214   16834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:25.364376   16834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:25.364538   16834 out.go:298] Setting JSON to false
	I0328 11:55:25.364552   16834 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:25.364594   16834 notify.go:220] Checking for updates...
	I0328 11:55:25.364803   16834 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:25.364811   16834 status.go:255] checking status of ha-446000 ...
	I0328 11:55:25.365083   16834 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:25.365087   16834 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:25.365090   16834 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (75.511875ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:32.267244   16840 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:32.267432   16840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:32.267436   16840 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:32.267440   16840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:32.267613   16840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:32.267792   16840 out.go:298] Setting JSON to false
	I0328 11:55:32.267807   16840 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:32.267851   16840 notify.go:220] Checking for updates...
	I0328 11:55:32.268117   16840 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:32.268125   16840 status.go:255] checking status of ha-446000 ...
	I0328 11:55:32.268404   16840 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:32.268408   16840 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:32.268411   16840 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (78.862375ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:36.533353   16842 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:36.533521   16842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:36.533526   16842 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:36.533529   16842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:36.533711   16842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:36.533899   16842 out.go:298] Setting JSON to false
	I0328 11:55:36.533914   16842 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:36.533944   16842 notify.go:220] Checking for updates...
	I0328 11:55:36.534173   16842 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:36.534180   16842 status.go:255] checking status of ha-446000 ...
	I0328 11:55:36.534441   16842 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:36.534446   16842 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:36.534449   16842 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (77.93325ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:55:45.526628   16844 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:55:45.526832   16844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:45.526836   16844 out.go:304] Setting ErrFile to fd 2...
	I0328 11:55:45.526839   16844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:55:45.527009   16844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:55:45.527161   16844 out.go:298] Setting JSON to false
	I0328 11:55:45.527176   16844 mustload.go:65] Loading cluster: ha-446000
	I0328 11:55:45.527238   16844 notify.go:220] Checking for updates...
	I0328 11:55:45.527428   16844 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:55:45.527435   16844 status.go:255] checking status of ha-446000 ...
	I0328 11:55:45.527680   16844 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:55:45.527685   16844 status.go:343] host is not running, skipping remaining checks
	I0328 11:55:45.527688   16844 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (75.515125ms)

                                                
                                                
-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:56:04.643049   16861 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:04.643253   16861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:04.643257   16861 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:04.643260   16861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:04.643421   16861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:04.643566   16861 out.go:298] Setting JSON to false
	I0328 11:56:04.643580   16861 mustload.go:65] Loading cluster: ha-446000
	I0328 11:56:04.643625   16861 notify.go:220] Checking for updates...
	I0328 11:56:04.643818   16861 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:04.643826   16861 status.go:255] checking status of ha-446000 ...
	I0328 11:56:04.644087   16861 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:56:04.644092   16861 status.go:343] host is not running, skipping remaining checks
	I0328 11:56:04.644094   16861 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr" : exit status 7
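The timestamps in the repeated status runs above (11:55:15 through 11:56:04) show ha_test.go:428 polling with growing waits until it gives up at exit status 7. A rough sketch of that retry shape; the deadline and backoff values are illustrative, not the test's actual schedule:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(45 * time.Second)
		wait := time.Second
		for time.Now().Before(deadline) {
			// Same invocation the test retries above.
			err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-446000",
				"status", "-v=7", "--alsologtostderr").Run()
			if err == nil {
				fmt.Println("cluster is up")
				return
			}
			time.Sleep(wait)
			wait *= 2 // back off between attempts
		}
		fmt.Println("gave up: status never returned exit code 0")
	}

Since the host never leaves the "Stopped" state here, every attempt exits 7 and the loop runs out its budget, matching the ~49s the test spends before failing.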
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (35.045667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.72s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-446000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-446000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.105209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-446000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-446000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-446000 -v=7 --alsologtostderr: (3.612925209s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-446000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-446000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219755208s)

                                                
                                                
-- stdout --
	* [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	* Restarting existing qemu2 VM for "ha-446000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-446000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:56:08.501817   16891 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:08.501987   16891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:08.501992   16891 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:08.501995   16891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:08.502152   16891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:08.503443   16891 out.go:298] Setting JSON to false
	I0328 11:56:08.522678   16891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10540,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:56:08.522739   16891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:56:08.526507   16891 out.go:177] * [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:56:08.534374   16891 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:56:08.537413   16891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:56:08.534409   16891 notify.go:220] Checking for updates...
	I0328 11:56:08.540323   16891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:56:08.543381   16891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:56:08.546424   16891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:56:08.547783   16891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:56:08.550696   16891 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:08.550757   16891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:56:08.555380   16891 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:56:08.560348   16891 start.go:297] selected driver: qemu2
	I0328 11:56:08.560353   16891 start.go:901] validating driver "qemu2" against &{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:56:08.560406   16891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:56:08.562744   16891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:56:08.562790   16891 cni.go:84] Creating CNI manager for ""
	I0328 11:56:08.562796   16891 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 11:56:08.562842   16891 start.go:340] cluster config:
	{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:56:08.567322   16891 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:56:08.574396   16891 out.go:177] * Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	I0328 11:56:08.578353   16891 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:56:08.578368   16891 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:56:08.578380   16891 cache.go:56] Caching tarball of preloaded images
	I0328 11:56:08.578431   16891 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:56:08.578437   16891 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:56:08.578522   16891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/ha-446000/config.json ...
	I0328 11:56:08.578964   16891 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:56:08.578999   16891 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "ha-446000"
	I0328 11:56:08.579009   16891 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:56:08.579015   16891 fix.go:54] fixHost starting: 
	I0328 11:56:08.579128   16891 fix.go:112] recreateIfNeeded on ha-446000: state=Stopped err=<nil>
	W0328 11:56:08.579137   16891 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:56:08.583353   16891 out.go:177] * Restarting existing qemu2 VM for "ha-446000" ...
	I0328 11:56:08.591511   16891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:74:5f:15:22:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:56:08.593565   16891 main.go:141] libmachine: STDOUT: 
	I0328 11:56:08.593585   16891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:56:08.593613   16891 fix.go:56] duration metric: took 14.597208ms for fixHost
	I0328 11:56:08.593618   16891 start.go:83] releasing machines lock for "ha-446000", held for 14.614375ms
	W0328 11:56:08.593624   16891 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:56:08.593656   16891 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:56:08.593661   16891 start.go:728] Will try again in 5 seconds ...
	I0328 11:56:13.595939   16891 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:56:13.596284   16891 start.go:364] duration metric: took 260.333µs to acquireMachinesLock for "ha-446000"
	I0328 11:56:13.596388   16891 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:56:13.596403   16891 fix.go:54] fixHost starting: 
	I0328 11:56:13.596816   16891 fix.go:112] recreateIfNeeded on ha-446000: state=Stopped err=<nil>
	W0328 11:56:13.596832   16891 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:56:13.604265   16891 out.go:177] * Restarting existing qemu2 VM for "ha-446000" ...
	I0328 11:56:13.607410   16891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:74:5f:15:22:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:56:13.617154   16891 main.go:141] libmachine: STDOUT: 
	I0328 11:56:13.617226   16891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:56:13.617339   16891 fix.go:56] duration metric: took 20.935417ms for fixHost
	I0328 11:56:13.617366   16891 start.go:83] releasing machines lock for "ha-446000", held for 21.065709ms
	W0328 11:56:13.617565   16891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:56:13.625283   16891 out.go:177] 
	W0328 11:56:13.628337   16891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:56:13.628365   16891 out.go:239] * 
	* 
	W0328 11:56:13.630979   16891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:56:13.638294   16891 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-446000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-446000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (34.932292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.97s)
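
Diagnostic note: every failure in this section shares one root cause. The qemu2 driver hands the VM's network to socket_vmnet_client, and the socket_vmnet daemon that should be listening on /var/run/socket_vmnet refuses the connection, so the VM never starts. A minimal triage sketch for the CI host follows; the daemon path (assumed to sit next to the client binary logged above) and the gateway address (the example value from the lima-vm/socket_vmnet README) are assumptions, so adjust both to the local install:

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, start it in the foreground to surface errors
	# (binary path and gateway are assumptions; match the install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Then retry the failing start:
	out/minikube-darwin-arm64 start -p ha-446000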

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.377375ms)

-- stdout --
	* The control-plane node ha-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-446000"

-- /stdout --
** stderr ** 
	I0328 11:56:13.790066   16907 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:13.790685   16907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:13.790690   16907 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:13.790693   16907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:13.790917   16907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:13.791363   16907 mustload.go:65] Loading cluster: ha-446000
	I0328 11:56:13.791536   16907 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:13.794991   16907 out.go:177] * The control-plane node ha-446000 host is not running: state=Stopped
	I0328 11:56:13.798931   16907 out.go:177]   To start a cluster, run: "minikube start -p ha-446000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-446000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (32.251542ms)

-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0328 11:56:13.834401   16909 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:13.834557   16909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:13.834560   16909 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:13.834562   16909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:13.834696   16909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:13.834816   16909 out.go:298] Setting JSON to false
	I0328 11:56:13.834827   16909 mustload.go:65] Loading cluster: ha-446000
	I0328 11:56:13.834885   16909 notify.go:220] Checking for updates...
	I0328 11:56:13.835034   16909 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:13.835041   16909 status.go:255] checking status of ha-446000 ...
	I0328 11:56:13.835228   16909 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:56:13.835231   16909 status.go:343] host is not running, skipping remaining checks
	I0328 11:56:13.835233   16909 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (31.912291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-446000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.021958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

TestMultiControlPlane/serial/StopCluster (4.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-446000 stop -v=7 --alsologtostderr: (4.04540125s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr: exit status 7 (69.478958ms)

-- stdout --
	ha-446000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0328 11:56:18.086853   16939 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:18.087045   16939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:18.087049   16939 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:18.087052   16939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:18.087230   16939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:18.087413   16939 out.go:298] Setting JSON to false
	I0328 11:56:18.087426   16939 mustload.go:65] Loading cluster: ha-446000
	I0328 11:56:18.087461   16939 notify.go:220] Checking for updates...
	I0328 11:56:18.087667   16939 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:18.087674   16939 status.go:255] checking status of ha-446000 ...
	I0328 11:56:18.087932   16939 status.go:330] ha-446000 host status = "Stopped" (err=<nil>)
	I0328 11:56:18.087937   16939 status.go:343] host is not running, skipping remaining checks
	I0328 11:56:18.087939   16939 status.go:257] ha-446000 status: &{Name:ha-446000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-446000 status -v=7 --alsologtostderr": ha-446000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (34.199708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (4.15s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-446000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-446000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.177371416s)

-- stdout --
	* [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	* Restarting existing qemu2 VM for "ha-446000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-446000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 11:56:18.153384   16943 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:18.153512   16943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:18.153515   16943 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:18.153518   16943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:18.153649   16943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:18.154684   16943 out.go:298] Setting JSON to false
	I0328 11:56:18.170683   16943 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10550,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:56:18.170750   16943 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:56:18.175114   16943 out.go:177] * [ha-446000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:56:18.181990   16943 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:56:18.182020   16943 notify.go:220] Checking for updates...
	I0328 11:56:18.186020   16943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:56:18.188884   16943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:56:18.191959   16943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:56:18.195022   16943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:56:18.197955   16943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:56:18.201267   16943 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:18.201528   16943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:56:18.206051   16943 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:56:18.212978   16943 start.go:297] selected driver: qemu2
	I0328 11:56:18.212983   16943 start.go:901] validating driver "qemu2" against &{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:56:18.213037   16943 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:56:18.215201   16943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:56:18.215245   16943 cni.go:84] Creating CNI manager for ""
	I0328 11:56:18.215250   16943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 11:56:18.215291   16943 start.go:340] cluster config:
	{Name:ha-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-446000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:56:18.219563   16943 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:56:18.225005   16943 out.go:177] * Starting "ha-446000" primary control-plane node in "ha-446000" cluster
	I0328 11:56:18.228951   16943 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:56:18.228966   16943 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:56:18.228981   16943 cache.go:56] Caching tarball of preloaded images
	I0328 11:56:18.229082   16943 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:56:18.229100   16943 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:56:18.229162   16943 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/ha-446000/config.json ...
	I0328 11:56:18.229605   16943 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:56:18.229639   16943 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "ha-446000"
	I0328 11:56:18.229649   16943 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:56:18.229655   16943 fix.go:54] fixHost starting: 
	I0328 11:56:18.229781   16943 fix.go:112] recreateIfNeeded on ha-446000: state=Stopped err=<nil>
	W0328 11:56:18.229792   16943 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:56:18.233947   16943 out.go:177] * Restarting existing qemu2 VM for "ha-446000" ...
	I0328 11:56:18.241795   16943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:74:5f:15:22:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:56:18.243845   16943 main.go:141] libmachine: STDOUT: 
	I0328 11:56:18.243866   16943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:56:18.243896   16943 fix.go:56] duration metric: took 14.240042ms for fixHost
	I0328 11:56:18.243901   16943 start.go:83] releasing machines lock for "ha-446000", held for 14.257584ms
	W0328 11:56:18.243907   16943 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:56:18.243937   16943 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:56:18.243943   16943 start.go:728] Will try again in 5 seconds ...
	I0328 11:56:23.245209   16943 start.go:360] acquireMachinesLock for ha-446000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:56:23.245527   16943 start.go:364] duration metric: took 246.917µs to acquireMachinesLock for "ha-446000"
	I0328 11:56:23.245647   16943 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:56:23.245663   16943 fix.go:54] fixHost starting: 
	I0328 11:56:23.246331   16943 fix.go:112] recreateIfNeeded on ha-446000: state=Stopped err=<nil>
	W0328 11:56:23.246358   16943 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:56:23.251764   16943 out.go:177] * Restarting existing qemu2 VM for "ha-446000" ...
	I0328 11:56:23.255040   16943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:74:5f:15:22:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/ha-446000/disk.qcow2
	I0328 11:56:23.264737   16943 main.go:141] libmachine: STDOUT: 
	I0328 11:56:23.264812   16943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:56:23.264890   16943 fix.go:56] duration metric: took 19.226667ms for fixHost
	I0328 11:56:23.264919   16943 start.go:83] releasing machines lock for "ha-446000", held for 19.370583ms
	W0328 11:56:23.265118   16943 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-446000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:56:23.271343   16943 out.go:177] 
	W0328 11:56:23.275766   16943 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:56:23.275789   16943 out.go:239] * 
	* 
	W0328 11:56:23.278228   16943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:56:23.286687   16943 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-446000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (69.829708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-446000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.196084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-446000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-446000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.394959ms)

-- stdout --
	* The control-plane node ha-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-446000"

-- /stdout --
** stderr ** 
	I0328 11:56:23.511605   16961 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:56:23.511752   16961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:23.511756   16961 out.go:304] Setting ErrFile to fd 2...
	I0328 11:56:23.511759   16961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:56:23.511901   16961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:56:23.512137   16961 mustload.go:65] Loading cluster: ha-446000
	I0328 11:56:23.512319   16961 config.go:182] Loaded profile config "ha-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:56:23.516849   16961 out.go:177] * The control-plane node ha-446000 host is not running: state=Stopped
	I0328 11:56:23.520899   16961 out.go:177]   To start a cluster, run: "minikube start -p ha-446000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-446000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (31.955292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-446000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-446000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-446000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-446000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-446000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-446000 -n ha-446000: exit status 7 (32.147417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)
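
Note: the assertions above (ha_test.go:304 expecting 4 nodes, ha_test.go:307 expecting "HAppy") fail as a knock-on effect of the provisioning failures: the profile never grew beyond its single stopped control-plane node, which is exactly the one-entry "Nodes" list visible in the JSON. A quick way to read the node count straight from that output (jq is assumed to be available; it is not part of the test tooling):

	out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {name: .Name, nodes: (.Config.Nodes | length)}'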

TestImageBuild/serial/Setup (9.98s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-812000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-812000 --driver=qemu2 : exit status 80 (9.901236125s)

-- stdout --
	* [image-812000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-812000" primary control-plane node in "image-812000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-812000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-812000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-812000 -n image-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-812000 -n image-812000: exit status 7 (74.633959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.98s)
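Every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU never gets a network file descriptor. A quick standalone check of the same socket path shown in the logs — independent of minikube, and assuming only that the daemon listens on that unix socket — prints the identical "connection refused" when socket_vmnet is not running:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix socket that socket_vmnet_client is pointed at in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this runner this would print the same "connection refused" the tests hit.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}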

TestJSONOutput/start/Command (9.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-663000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-663000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.848151333s)

-- stdout --
	{"specversion":"1.0","id":"2f2b838a-2850-4413-908e-334a2eb5e0a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-663000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"654169a1-4039-487e-b694-69110a2d6526","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17877"}}
	{"specversion":"1.0","id":"1737cd5b-9919-4744-9d01-0af7ff713559","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig"}}
	{"specversion":"1.0","id":"8e64b063-e3e1-4d3e-ac20-a1d5dfac2305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"61e8be82-7873-4b52-99c1-133706cae2bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f7d8c1d-44e7-46b5-a1fd-7c2f994df7c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube"}}
	{"specversion":"1.0","id":"e019e886-dd14-470a-bc88-e973e1d4445d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0c7541c-52cd-494f-99c1-99f9bbd36276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f2a18e3-a43c-4325-bea3-6ebe066b7cd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9fec4124-1a3b-48df-9043-3446fcc60bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-663000\" primary control-plane node in \"json-output-663000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f7a962e-c8f5-48b5-a8e0-a93db999fe9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ae646e09-67ec-4f29-b86c-45458f75b7c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-663000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ba0eecb-a80b-4efd-b9a6-fba70bafd5c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"73a9a3f1-bed8-4a52-bc22-6a6ab66dffa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1156fbd6-2b3d-438a-bfde-af0b43d6e759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-663000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0f587a01-c8f9-4e8d-be6a-c320357378fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"99cceae9-7072-428d-b149-3f475d6a4f92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-663000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.85s)
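The "unable to marshal output" error above comes from the test decoding stdout line by line as CloudEvents: the bare "OUTPUT:" / "ERROR:" lines that the qemu wrapper interleaves into stdout are not JSON, so decoding fails on the first byte, which is exactly the "invalid character 'O' looking for beginning of value" message. A minimal reproduction of that parse failure — the cloudEvent struct is a hypothetical subset of the event fields printed above, not the test's own type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Hypothetical subset of the CloudEvents fields shown in the stdout above.
	type cloudEvent struct {
		Specversion string `json:"specversion"`
		Type        string `json:"type"`
	}

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `, // raw wrapper output interleaved into stdout
		}
		for _, l := range lines {
			var ev cloudEvent
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				fmt.Println(err) // invalid character 'O' looking for beginning of value
				continue
			}
			fmt.Println("parsed:", ev.Type)
		}
	}

The unpause failure further below trips the same check on the leading '*' of the plain-text fallback output.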

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-663000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-663000 --output=json --user=testUser: exit status 83 (81.592583ms)

-- stdout --
	{"specversion":"1.0","id":"2c2ffe3b-bdd7-41cc-8fbf-e98ff4781b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-663000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"dd70441b-5f50-46e6-a4ae-990cbd20a87c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-663000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-663000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-663000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-663000 --output=json --user=testUser: exit status 83 (47.314709ms)

-- stdout --
	* The control-plane node json-output-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-663000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-663000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-663000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-908000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-908000 --driver=qemu2 : exit status 80 (9.77820925s)

-- stdout --
	* [first-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-908000" primary control-plane node in "first-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-908000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-28 11:56:57.441866 -0700 PDT m=+542.205521460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-909000 -n second-909000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-909000 -n second-909000: exit status 85 (85.428417ms)

-- stdout --
	* Profile "second-909000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-909000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-909000" host is not running, skipping log retrieval (state="* Profile \"second-909000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-909000\"")
helpers_test.go:175: Cleaning up "second-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-909000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-28 11:56:57.760785 -0700 PDT m=+542.524436335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-908000 -n first-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-908000 -n first-908000: exit status 7 (32.549291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-908000
--- FAIL: TestMinikubeProfile (10.24s)

TestMountStart/serial/StartWithMountFirst (10.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.596129542s)

-- stdout --
	* [mount-start-1-850000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-850000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-850000 -n mount-start-1-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-850000 -n mount-start-1-850000: exit status 7 (70.28425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.67s)

TestMultiNode/serial/FreshStart2Nodes (10.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-652000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-652000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.956075125s)

-- stdout --
	* [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 11:57:08.936007   17145 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:57:08.936117   17145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:57:08.936120   17145 out.go:304] Setting ErrFile to fd 2...
	I0328 11:57:08.936123   17145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:57:08.936259   17145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:57:08.937309   17145 out.go:298] Setting JSON to false
	I0328 11:57:08.953324   17145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10600,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:57:08.953382   17145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:57:08.959533   17145 out.go:177] * [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:57:08.967562   17145 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:57:08.971504   17145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:57:08.967615   17145 notify.go:220] Checking for updates...
	I0328 11:57:08.974563   17145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:57:08.977506   17145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:57:08.981482   17145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:57:08.984515   17145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:57:08.987710   17145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:57:08.991470   17145 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 11:57:08.998476   17145 start.go:297] selected driver: qemu2
	I0328 11:57:08.998482   17145 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:57:08.998490   17145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:57:09.000819   17145 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:57:09.005516   17145 out.go:177] * Automatically selected the socket_vmnet network
	I0328 11:57:09.008609   17145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:57:09.008649   17145 cni.go:84] Creating CNI manager for ""
	I0328 11:57:09.008654   17145 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0328 11:57:09.008661   17145 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 11:57:09.008699   17145 start.go:340] cluster config:
	{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:57:09.013353   17145 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:57:09.021501   17145 out.go:177] * Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	I0328 11:57:09.025535   17145 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:57:09.025550   17145 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:57:09.025557   17145 cache.go:56] Caching tarball of preloaded images
	I0328 11:57:09.025623   17145 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:57:09.025629   17145 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:57:09.025872   17145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/multinode-652000/config.json ...
	I0328 11:57:09.025884   17145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/multinode-652000/config.json: {Name:mk15ba894ffd5430312a3715aeeb964a15b48b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:57:09.026109   17145 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:57:09.026142   17145 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "multinode-652000"
	I0328 11:57:09.026156   17145 start.go:93] Provisioning new machine with config: &{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:57:09.026185   17145 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:57:09.033517   17145 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 11:57:09.051341   17145 start.go:159] libmachine.API.Create for "multinode-652000" (driver="qemu2")
	I0328 11:57:09.051370   17145 client.go:168] LocalClient.Create starting
	I0328 11:57:09.051456   17145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:57:09.051489   17145 main.go:141] libmachine: Decoding PEM data...
	I0328 11:57:09.051500   17145 main.go:141] libmachine: Parsing certificate...
	I0328 11:57:09.051544   17145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:57:09.051568   17145 main.go:141] libmachine: Decoding PEM data...
	I0328 11:57:09.051576   17145 main.go:141] libmachine: Parsing certificate...
	I0328 11:57:09.051966   17145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:57:09.200384   17145 main.go:141] libmachine: Creating SSH key...
	I0328 11:57:09.310174   17145 main.go:141] libmachine: Creating Disk image...
	I0328 11:57:09.310182   17145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:57:09.310406   17145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:09.322363   17145 main.go:141] libmachine: STDOUT: 
	I0328 11:57:09.322395   17145 main.go:141] libmachine: STDERR: 
	I0328 11:57:09.322470   17145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2 +20000M
	I0328 11:57:09.333947   17145 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:57:09.333969   17145 main.go:141] libmachine: STDERR: 
	I0328 11:57:09.333987   17145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:09.333991   17145 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:57:09.334025   17145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:56:33:0d:10:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:09.335881   17145 main.go:141] libmachine: STDOUT: 
	I0328 11:57:09.335897   17145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:57:09.335919   17145 client.go:171] duration metric: took 284.540334ms to LocalClient.Create
	I0328 11:57:11.338186   17145 start.go:128] duration metric: took 2.311949417s to createHost
	I0328 11:57:11.338277   17145 start.go:83] releasing machines lock for "multinode-652000", held for 2.3120985s
	W0328 11:57:11.338326   17145 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:57:11.349276   17145 out.go:177] * Deleting "multinode-652000" in qemu2 ...
	W0328 11:57:11.391175   17145 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:57:11.391196   17145 start.go:728] Will try again in 5 seconds ...
	I0328 11:57:16.393456   17145 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:57:16.393904   17145 start.go:364] duration metric: took 338.625µs to acquireMachinesLock for "multinode-652000"
	I0328 11:57:16.394048   17145 start.go:93] Provisioning new machine with config: &{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 11:57:16.394331   17145 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 11:57:16.399373   17145 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 11:57:16.449964   17145 start.go:159] libmachine.API.Create for "multinode-652000" (driver="qemu2")
	I0328 11:57:16.450011   17145 client.go:168] LocalClient.Create starting
	I0328 11:57:16.450121   17145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 11:57:16.450185   17145 main.go:141] libmachine: Decoding PEM data...
	I0328 11:57:16.450211   17145 main.go:141] libmachine: Parsing certificate...
	I0328 11:57:16.450275   17145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 11:57:16.450322   17145 main.go:141] libmachine: Decoding PEM data...
	I0328 11:57:16.450337   17145 main.go:141] libmachine: Parsing certificate...
	I0328 11:57:16.450974   17145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 11:57:16.607906   17145 main.go:141] libmachine: Creating SSH key...
	I0328 11:57:16.787371   17145 main.go:141] libmachine: Creating Disk image...
	I0328 11:57:16.787377   17145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 11:57:16.787600   17145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:16.800318   17145 main.go:141] libmachine: STDOUT: 
	I0328 11:57:16.800352   17145 main.go:141] libmachine: STDERR: 
	I0328 11:57:16.800411   17145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2 +20000M
	I0328 11:57:16.811050   17145 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 11:57:16.811067   17145 main.go:141] libmachine: STDERR: 
	I0328 11:57:16.811081   17145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:16.811086   17145 main.go:141] libmachine: Starting QEMU VM...
	I0328 11:57:16.811124   17145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:1d:2e:2a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:57:16.812827   17145 main.go:141] libmachine: STDOUT: 
	I0328 11:57:16.812846   17145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:57:16.812859   17145 client.go:171] duration metric: took 362.837ms to LocalClient.Create
	I0328 11:57:18.815072   17145 start.go:128] duration metric: took 2.42068175s to createHost
	I0328 11:57:18.815135   17145 start.go:83] releasing machines lock for "multinode-652000", held for 2.421174209s
	W0328 11:57:18.815546   17145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:57:18.832234   17145 out.go:177] 
	W0328 11:57:18.837331   17145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:57:18.837363   17145 out.go:239] * 
	* 
	W0328 11:57:18.839514   17145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:57:18.851175   17145 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-652000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (74.502791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.03s)

TestMultiNode/serial/DeployApp2Nodes (79.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.449416ms)

** stderr ** 
	error: cluster "multinode-652000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- rollout status deployment/busybox: exit status 1 (59.238833ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.159125ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.37975ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.753209ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.96525ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.859667ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.203583ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.313917ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.520083ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.380458ms)

** stderr ** 
	error: no server found for cluster "multinode-652000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.288792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.3415ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.699875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.89375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.309458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.136208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (79.08s)
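
The three lookups above shell out to minikube's kubectl passthrough to run nslookup inside a test pod; the doubled space in "Pod  could not resolve" reflects an empty pod name, since the pod listing itself failed against the stopped host. A minimal standalone sketch of the check, assuming a hypothetical pod name (the real test derives it from `get pods`):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// podName is hypothetical; the real test derives it from `get pods`,
		// which is exactly the step that failed above.
		podName := "busybox-0"
		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, host := range hosts {
			out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-652000",
				"--", "exec", podName, "--", "nslookup", host).CombinedOutput()
			fmt.Printf("%s: err=%v\n%s", host, err, out)
		}
	}
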

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-652000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.641458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.536417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-652000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-652000 -v 3 --alsologtostderr: exit status 83 (42.021166ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-652000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:38.138385   17246 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:38.138524   17246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.138527   17246 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:38.138530   17246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.138646   17246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:38.138879   17246 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:38.139063   17246 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:38.143773   17246 out.go:177] * The control-plane node multinode-652000 host is not running: state=Stopped
	I0328 11:58:38.146589   17246 out.go:177]   To start a cluster, run: "minikube start -p multinode-652000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-652000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (31.860792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-652000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-652000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.854667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-652000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-652000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-652000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.187167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
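
The secondary "unexpected end of JSON input" failure follows mechanically from the first: kubectl exited non-zero, so the captured stdout was empty, and decoding empty input with encoding/json always yields that error. A minimal sketch of the mechanism:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl exited non-zero, so the captured stdout was empty; decoding
		// empty input is what produces "unexpected end of JSON input".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}
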

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-652000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-652000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-652000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-652000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (31.786208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
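
The assertion at multinode_test.go:166 counts the entries in Config.Nodes from `profile list --output json`; the captured config records only the single control-plane node, so the two nodes that were never added are missing. A minimal sketch of that count, assuming simplified stand-in types (the real structs live in minikube's config package):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type node struct {
		Name         string
		ControlPlane bool
		Worker       bool
	}

	type profile struct {
		Name   string
		Status string
		Config struct {
			Nodes []node
		}
	}

	type profileList struct {
		Valid   []profile `json:"valid"`
		Invalid []profile `json:"invalid"`
	}

	func main() {
		// Trimmed from the output captured above: one recorded node where the
		// test expects three.
		data := []byte(`{"invalid":[],"valid":[{"Name":"multinode-652000","Status":"Stopped","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(data, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1; the test wants 3
		}
	}
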

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status --output json --alsologtostderr: exit status 7 (32.449875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-652000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:38.378524   17259 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:38.378657   17259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.378660   17259 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:38.378662   17259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.378799   17259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:38.378918   17259 out.go:298] Setting JSON to true
	I0328 11:58:38.378934   17259 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:38.378985   17259 notify.go:220] Checking for updates...
	I0328 11:58:38.379134   17259 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:38.379141   17259 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:38.379340   17259 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:38.379344   17259 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:38.379346   17259 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-652000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (31.489291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
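
The decode error above is a shape mismatch: for this single-node profile, `status --output json` emits one JSON object, while the test unmarshals into a slice ([]cmd.Status). A minimal sketch that reproduces the error with a stand-in Status type (the log shows []cmd.Status because the test's type lives in package cmd):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for minikube's cmd.Status; only the shape matters here.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// The single-object output captured in -- stdout -- above.
		out := []byte(`{"Name":"multinode-652000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		var statuses []Status
		err := json.Unmarshal(out, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}
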

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 node stop m03: exit status 85 (51.1645ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-652000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status: exit status 7 (31.87775ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr: exit status 7 (32.16375ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:38.525958   17267 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:38.526087   17267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.526090   17267 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:38.526093   17267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.526226   17267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:38.526357   17267 out.go:298] Setting JSON to false
	I0328 11:58:38.526368   17267 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:38.526416   17267 notify.go:220] Checking for updates...
	I0328 11:58:38.526559   17267 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:38.526565   17267 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:38.526778   17267 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:38.526782   17267 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:38.526784   17267 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr": multinode-652000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.164791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (56.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.880583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:38.591457   17271 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:38.591919   17271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.591923   17271 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:38.591926   17271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.592082   17271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:38.592300   17271 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:38.592486   17271 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:38.595738   17271 out.go:177] 
	W0328 11:58:38.598572   17271 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0328 11:58:38.598577   17271 out.go:239] * 
	* 
	W0328 11:58:38.600724   17271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:58:38.604595   17271 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0328 11:58:38.591457   17271 out.go:291] Setting OutFile to fd 1 ...
I0328 11:58:38.591919   17271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:58:38.591923   17271 out.go:304] Setting ErrFile to fd 2...
I0328 11:58:38.591926   17271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 11:58:38.592082   17271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
I0328 11:58:38.592300   17271 mustload.go:65] Loading cluster: multinode-652000
I0328 11:58:38.592486   17271 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0328 11:58:38.595738   17271 out.go:177] 
W0328 11:58:38.598572   17271 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0328 11:58:38.598577   17271 out.go:239] * 
* 
W0328 11:58:38.600724   17271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0328 11:58:38.604595   17271 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-652000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (32.626041ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:38.639630   17273 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:38.639781   17273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.639784   17273 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:38.639786   17273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:38.639921   17273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:38.640051   17273 out.go:298] Setting JSON to false
	I0328 11:58:38.640062   17273 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:38.640111   17273 notify.go:220] Checking for updates...
	I0328 11:58:38.640279   17273 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:38.640285   17273 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:38.640481   17273 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:38.640485   17273 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:38.640487   17273 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (73.426459ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:39.741298   17276 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:39.741484   17276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:39.741489   17276 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:39.741492   17276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:39.741669   17276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:39.741838   17276 out.go:298] Setting JSON to false
	I0328 11:58:39.741854   17276 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:39.741894   17276 notify.go:220] Checking for updates...
	I0328 11:58:39.742115   17276 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:39.742122   17276 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:39.742420   17276 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:39.742425   17276 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:39.742428   17276 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (76.838583ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:41.936432   17281 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:41.936629   17281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:41.936633   17281 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:41.936636   17281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:41.936811   17281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:41.936970   17281 out.go:298] Setting JSON to false
	I0328 11:58:41.936990   17281 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:41.937026   17281 notify.go:220] Checking for updates...
	I0328 11:58:41.937249   17281 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:41.937256   17281 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:41.937515   17281 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:41.937520   17281 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:41.937523   17281 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (76.247291ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:44.964234   17283 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:44.964390   17283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:44.964394   17283 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:44.964397   17283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:44.964560   17283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:44.964712   17283 out.go:298] Setting JSON to false
	I0328 11:58:44.964727   17283 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:44.964767   17283 notify.go:220] Checking for updates...
	I0328 11:58:44.964998   17283 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:44.965007   17283 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:44.965300   17283 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:44.965304   17283 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:44.965307   17283 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (78.138625ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:48.593246   17287 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:48.593413   17287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:48.593417   17287 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:48.593420   17287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:48.593578   17287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:48.593730   17287 out.go:298] Setting JSON to false
	I0328 11:58:48.593745   17287 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:48.593779   17287 notify.go:220] Checking for updates...
	I0328 11:58:48.594003   17287 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:48.594011   17287 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:48.594308   17287 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:48.594313   17287 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:48.594316   17287 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (76.195ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:51.422663   17294 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:51.422821   17294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:51.422825   17294 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:51.422829   17294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:51.422987   17294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:51.423150   17294 out.go:298] Setting JSON to false
	I0328 11:58:51.423164   17294 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:51.423196   17294 notify.go:220] Checking for updates...
	I0328 11:58:51.423409   17294 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:51.423416   17294 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:51.423714   17294 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:51.423719   17294 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:51.423722   17294 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (77.293583ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:58:56.983007   17298 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:58:56.983173   17298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:56.983177   17298 out.go:304] Setting ErrFile to fd 2...
	I0328 11:58:56.983179   17298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:58:56.983349   17298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:58:56.983511   17298 out.go:298] Setting JSON to false
	I0328 11:58:56.983526   17298 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:58:56.983564   17298 notify.go:220] Checking for updates...
	I0328 11:58:56.983775   17298 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:58:56.983783   17298 status.go:255] checking status of multinode-652000 ...
	I0328 11:58:56.984056   17298 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:58:56.984061   17298 status.go:343] host is not running, skipping remaining checks
	I0328 11:58:56.984064   17298 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (76.242333ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:59:11.814555   17309 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:11.814777   17309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:11.814781   17309 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:11.814784   17309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:11.814945   17309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:11.815101   17309 out.go:298] Setting JSON to false
	I0328 11:59:11.815117   17309 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:59:11.815156   17309 notify.go:220] Checking for updates...
	I0328 11:59:11.815348   17309 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:11.815356   17309 status.go:255] checking status of multinode-652000 ...
	I0328 11:59:11.815666   17309 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:59:11.815671   17309 status.go:343] host is not running, skipping remaining checks
	I0328 11:59:11.815674   17309 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr: exit status 7 (78.443959ms)

                                                
                                                
-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:59:34.830822   17321 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:34.831031   17321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:34.831036   17321 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:34.831039   17321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:34.831191   17321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:34.831376   17321 out.go:298] Setting JSON to false
	I0328 11:59:34.831394   17321 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:59:34.831418   17321 notify.go:220] Checking for updates...
	I0328 11:59:34.831664   17321 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:34.831672   17321 status.go:255] checking status of multinode-652000 ...
	I0328 11:59:34.831949   17321 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:59:34.831954   17321 status.go:343] host is not running, skipping remaining checks
	I0328 11:59:34.831957   17321 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-652000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (34.695542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.31s)
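
The timestamps in the repeated status checks above (11:58:38 through 11:59:34) show the test polling with growing delays before giving up. A minimal sketch of that kind of backoff poll, assuming a simple doubling delay (minikube's own retry helper differs in detail):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 8; attempt++ {
			// Exit status 7 here means the host is stopped, matching the log above.
			err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-652000", "status").Run()
			if err == nil {
				fmt.Println("host is running")
				return
			}
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // doubling backoff; real helpers typically cap the delay
		}
		fmt.Println("giving up: host never reported Running")
	}
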

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-652000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-652000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-652000: (1.958324917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-652000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-652000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222610208s)

                                                
                                                
-- stdout --
	* [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	* Restarting existing qemu2 VM for "multinode-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 11:59:36.921541   17339 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:36.921702   17339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:36.921706   17339 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:36.921709   17339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:36.921861   17339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:36.922985   17339 out.go:298] Setting JSON to false
	I0328 11:59:36.942128   17339 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10748,"bootTime":1711641628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:59:36.942197   17339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:59:36.946842   17339 out.go:177] * [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:59:36.955025   17339 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:59:36.955063   17339 notify.go:220] Checking for updates...
	I0328 11:59:36.958948   17339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:59:36.961950   17339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:59:36.964974   17339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:59:36.967913   17339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:59:36.970948   17339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:59:36.974236   17339 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:36.974298   17339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:59:36.977951   17339 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:59:36.984962   17339 start.go:297] selected driver: qemu2
	I0328 11:59:36.984969   17339 start.go:901] validating driver "qemu2" against &{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:59:36.985044   17339 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:59:36.987398   17339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:59:36.987448   17339 cni.go:84] Creating CNI manager for ""
	I0328 11:59:36.987454   17339 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 11:59:36.987505   17339 start.go:340] cluster config:
	{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:59:36.991990   17339 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:59:36.998944   17339 out.go:177] * Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	I0328 11:59:37.002976   17339 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:59:37.002998   17339 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:59:37.003007   17339 cache.go:56] Caching tarball of preloaded images
	I0328 11:59:37.003062   17339 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:59:37.003067   17339 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:59:37.003124   17339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/multinode-652000/config.json ...
	I0328 11:59:37.003623   17339 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:59:37.003658   17339 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "multinode-652000"
	I0328 11:59:37.003668   17339 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:59:37.003675   17339 fix.go:54] fixHost starting: 
	I0328 11:59:37.003806   17339 fix.go:112] recreateIfNeeded on multinode-652000: state=Stopped err=<nil>
	W0328 11:59:37.003815   17339 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:59:37.007933   17339 out.go:177] * Restarting existing qemu2 VM for "multinode-652000" ...
	I0328 11:59:37.015924   17339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:1d:2e:2a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:59:37.018134   17339 main.go:141] libmachine: STDOUT: 
	I0328 11:59:37.018155   17339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:59:37.018198   17339 fix.go:56] duration metric: took 14.521834ms for fixHost
	I0328 11:59:37.018204   17339 start.go:83] releasing machines lock for "multinode-652000", held for 14.540833ms
	W0328 11:59:37.018211   17339 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:59:37.018257   17339 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:59:37.018263   17339 start.go:728] Will try again in 5 seconds ...
	I0328 11:59:42.020347   17339 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:59:42.020709   17339 start.go:364] duration metric: took 267.583µs to acquireMachinesLock for "multinode-652000"
	I0328 11:59:42.020820   17339 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:59:42.020840   17339 fix.go:54] fixHost starting: 
	I0328 11:59:42.021536   17339 fix.go:112] recreateIfNeeded on multinode-652000: state=Stopped err=<nil>
	W0328 11:59:42.021567   17339 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:59:42.026466   17339 out.go:177] * Restarting existing qemu2 VM for "multinode-652000" ...
	I0328 11:59:42.031606   17339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:1d:2e:2a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:59:42.041326   17339 main.go:141] libmachine: STDOUT: 
	I0328 11:59:42.041409   17339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:59:42.041484   17339 fix.go:56] duration metric: took 20.645916ms for fixHost
	I0328 11:59:42.041503   17339 start.go:83] releasing machines lock for "multinode-652000", held for 20.7675ms
	W0328 11:59:42.041675   17339 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:59:42.050435   17339 out.go:177] 
	W0328 11:59:42.054566   17339 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:59:42.054594   17339 out.go:239] * 
	* 
	W0328 11:59:42.057550   17339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:59:42.066400   17339 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-652000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-652000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (34.610042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.32s)
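
Every failure in this block, and in the ones that follow, reduces to the same stderr line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. That points at the host rather than at minikube: nothing is listening on the unix socket that socket_vmnet_client, and therefore every qemu2 VM start, depends on. The minimal Go sketch below is not part of the test suite; it probes the same socket (the path matches SocketVMnetPath in the cluster config above) to confirm the daemon is down before blaming the cluster profile:

// probe_socket_vmnet.go - a hypothetical diagnostic, not minikube code.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state the report shows: the daemon is not running,
		// so socket_vmnet_client (and the VM start behind it) must fail.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

If the dial fails with "connection refused", as it does throughout this run, restarting the socket_vmnet daemon on the build host is the first thing to try; the "minikube delete" advice printed above is unlikely to help, since no profile state is at fault.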

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 node delete m03: exit status 83 (44.746709ms)

-- stdout --
	* The control-plane node multinode-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-652000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-652000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr: exit status 7 (32.565083ms)

-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0328 11:59:42.262562   17353 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:42.262717   17353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:42.262720   17353 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:42.262722   17353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:42.262854   17353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:42.262996   17353 out.go:298] Setting JSON to false
	I0328 11:59:42.263007   17353 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:59:42.263057   17353 notify.go:220] Checking for updates...
	I0328 11:59:42.263204   17353 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:42.263210   17353 status.go:255] checking status of multinode-652000 ...
	I0328 11:59:42.263415   17353 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:59:42.263418   17353 status.go:343] host is not running, skipping remaining checks
	I0328 11:59:42.263421   17353 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.005333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (2.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-652000 stop: (2.139149333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status: exit status 7 (70.968292ms)

-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr: exit status 7 (34.047292ms)

-- stdout --
	multinode-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0328 11:59:44.539504   17371 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:44.539626   17371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:44.539630   17371 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:44.539632   17371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:44.539744   17371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:44.539862   17371 out.go:298] Setting JSON to false
	I0328 11:59:44.539877   17371 mustload.go:65] Loading cluster: multinode-652000
	I0328 11:59:44.539937   17371 notify.go:220] Checking for updates...
	I0328 11:59:44.540052   17371 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:44.540059   17371 status.go:255] checking status of multinode-652000 ...
	I0328 11:59:44.540285   17371 status.go:330] multinode-652000 host status = "Stopped" (err=<nil>)
	I0328 11:59:44.540289   17371 status.go:343] host is not running, skipping remaining checks
	I0328 11:59:44.540291   17371 status.go:257] multinode-652000 status: &{Name:multinode-652000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr": multinode-652000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-652000 status --alsologtostderr": multinode-652000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.142917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.28s)
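
The assertion that fails here ("incorrect number of stopped hosts") counts nodes by their status lines: after "minikube stop" the test expects one "host: Stopped" and one "kubelet: Stopped" per node, but only the control-plane node exists because no worker VM was ever created. A hedged sketch of that counting check, using illustrative names rather than the exact multinode_test.go code:

// count_stopped.go - illustrative only; the real test reads `minikube status` output.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The single-node status captured in the log above.
	status := "multinode-652000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
	wantNodes := 2 // control plane plus one worker in a healthy multinode run
	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
}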

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-652000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-652000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188760583s)

-- stdout --
	* [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	* Restarting existing qemu2 VM for "multinode-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 11:59:44.603280   17375 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:59:44.603418   17375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:44.603421   17375 out.go:304] Setting ErrFile to fd 2...
	I0328 11:59:44.603423   17375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:59:44.603539   17375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:59:44.604580   17375 out.go:298] Setting JSON to false
	I0328 11:59:44.620540   17375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10756,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:59:44.620598   17375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:59:44.626112   17375 out.go:177] * [multinode-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:59:44.634013   17375 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:59:44.634075   17375 notify.go:220] Checking for updates...
	I0328 11:59:44.642042   17375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:59:44.645003   17375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:59:44.648038   17375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:59:44.651107   17375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:59:44.653996   17375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:59:44.657288   17375 config.go:182] Loaded profile config "multinode-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:59:44.657554   17375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:59:44.662030   17375 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:59:44.669054   17375 start.go:297] selected driver: qemu2
	I0328 11:59:44.669062   17375 start.go:901] validating driver "qemu2" against &{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:59:44.669128   17375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:59:44.671375   17375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 11:59:44.671421   17375 cni.go:84] Creating CNI manager for ""
	I0328 11:59:44.671425   17375 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 11:59:44.671467   17375 start.go:340] cluster config:
	{Name:multinode-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-652000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:59:44.675769   17375 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:59:44.681038   17375 out.go:177] * Starting "multinode-652000" primary control-plane node in "multinode-652000" cluster
	I0328 11:59:44.685046   17375 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:59:44.685061   17375 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:59:44.685075   17375 cache.go:56] Caching tarball of preloaded images
	I0328 11:59:44.685129   17375 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 11:59:44.685134   17375 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:59:44.685203   17375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/multinode-652000/config.json ...
	I0328 11:59:44.685678   17375 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:59:44.685704   17375 start.go:364] duration metric: took 20.709µs to acquireMachinesLock for "multinode-652000"
	I0328 11:59:44.685712   17375 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:59:44.685717   17375 fix.go:54] fixHost starting: 
	I0328 11:59:44.685836   17375 fix.go:112] recreateIfNeeded on multinode-652000: state=Stopped err=<nil>
	W0328 11:59:44.685845   17375 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:59:44.693987   17375 out.go:177] * Restarting existing qemu2 VM for "multinode-652000" ...
	I0328 11:59:44.697886   17375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:1d:2e:2a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:59:44.699899   17375 main.go:141] libmachine: STDOUT: 
	I0328 11:59:44.699921   17375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:59:44.699952   17375 fix.go:56] duration metric: took 14.234875ms for fixHost
	I0328 11:59:44.699956   17375 start.go:83] releasing machines lock for "multinode-652000", held for 14.248375ms
	W0328 11:59:44.699963   17375 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:59:44.700002   17375 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:59:44.700007   17375 start.go:728] Will try again in 5 seconds ...
	I0328 11:59:49.702269   17375 start.go:360] acquireMachinesLock for multinode-652000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 11:59:49.702649   17375 start.go:364] duration metric: took 282.542µs to acquireMachinesLock for "multinode-652000"
	I0328 11:59:49.702762   17375 start.go:96] Skipping create...Using existing machine configuration
	I0328 11:59:49.702781   17375 fix.go:54] fixHost starting: 
	I0328 11:59:49.703445   17375 fix.go:112] recreateIfNeeded on multinode-652000: state=Stopped err=<nil>
	W0328 11:59:49.703469   17375 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 11:59:49.708540   17375 out.go:177] * Restarting existing qemu2 VM for "multinode-652000" ...
	I0328 11:59:49.716628   17375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:1d:2e:2a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/multinode-652000/disk.qcow2
	I0328 11:59:49.726327   17375 main.go:141] libmachine: STDOUT: 
	I0328 11:59:49.726394   17375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 11:59:49.726464   17375 fix.go:56] duration metric: took 23.679459ms for fixHost
	I0328 11:59:49.726481   17375 start.go:83] releasing machines lock for "multinode-652000", held for 23.809042ms
	W0328 11:59:49.726675   17375 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 11:59:49.734442   17375 out.go:177] 
	W0328 11:59:49.738553   17375 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 11:59:49.738597   17375 out.go:239] * 
	* 
	W0328 11:59:49.741323   17375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:59:49.748484   17375 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-652000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (67.39725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
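
The restart path in the stderr above is a plain retry-once scheme: fixHost fails, minikube warns "StartHost failed, but will try again", waits five seconds, makes one more attempt, and exits with GUEST_PROVISION when that attempt fails identically. A compact sketch of that control flow, with startHost as a hypothetical stand-in for the real driver call:

// retry_start.go - a sketch of the retry behavior visible in the log; not minikube's code.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// In this run the driver always fails the same way.
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}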

TestMultiNode/serial/ValidateNameConflict (21.7s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-652000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-652000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-652000-m01 --driver=qemu2 : exit status 80 (10.598126417s)

-- stdout --
	* [multinode-652000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-652000-m01" primary control-plane node in "multinode-652000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-652000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-652000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-652000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-652000-m02 --driver=qemu2 : exit status 80 (10.839937708s)

-- stdout --
	* [multinode-652000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-652000-m02" primary control-plane node in "multinode-652000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-652000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-652000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-652000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-652000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-652000: exit status 83 (81.295708ms)

-- stdout --
	* The control-plane node multinode-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-652000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-652000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-652000 -n multinode-652000: exit status 7 (32.577709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.70s)
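
Unlike the preceding subtests, ValidateNameConflict exercises profile naming rather than networking: secondary nodes get generated names such as m02 and m03, so a profile named multinode-652000-m01 can collide with a node of the existing multinode-652000 cluster. Here both starts die on the socket_vmnet error before that check ever matters. A small illustrative sketch of such a suffix check (the regexp is an assumption for illustration, not minikube's actual validation):

// name_conflict.go - hypothetical; shows the kind of node-name suffix the test probes.
package main

import (
	"fmt"
	"regexp"
)

var nodeSuffix = regexp.MustCompile(`-m\d+$`) // matches generated node names like -m02

func main() {
	for _, name := range []string{"multinode-652000-m01", "multinode-652000-m02", "plain-profile"} {
		fmt.Printf("%-24s looks like a node of another cluster: %v\n", name, nodeSuffix.MatchString(name))
	}
}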

TestPreload (9.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-866000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-866000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.788348291s)

-- stdout --
	* [test-preload-866000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-866000" primary control-plane node in "test-preload-866000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-866000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:00:11.701879   17460 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:00:11.701991   17460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:11.701995   17460 out.go:304] Setting ErrFile to fd 2...
	I0328 12:00:11.701996   17460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:00:11.702104   17460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:00:11.703193   17460 out.go:298] Setting JSON to false
	I0328 12:00:11.719395   17460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10783,"bootTime":1711641628,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:00:11.719462   17460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:00:11.725645   17460 out.go:177] * [test-preload-866000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:00:11.733685   17460 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:00:11.738643   17460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:00:11.733720   17460 notify.go:220] Checking for updates...
	I0328 12:00:11.741711   17460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:00:11.744665   17460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:00:11.748609   17460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:00:11.751635   17460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:00:11.755073   17460 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:00:11.755123   17460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:00:11.759630   17460 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:00:11.766665   17460 start.go:297] selected driver: qemu2
	I0328 12:00:11.766671   17460 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:00:11.766678   17460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:00:11.768974   17460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:00:11.771615   17460 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:00:11.774748   17460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:00:11.774782   17460 cni.go:84] Creating CNI manager for ""
	I0328 12:00:11.774791   17460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:00:11.774795   17460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:00:11.774830   17460 start.go:340] cluster config:
	{Name:test-preload-866000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-866000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/so
cket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:00:11.779284   17460 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.784600   17460 out.go:177] * Starting "test-preload-866000" primary control-plane node in "test-preload-866000" cluster
	I0328 12:00:11.788665   17460 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0328 12:00:11.788762   17460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/test-preload-866000/config.json ...
	I0328 12:00:11.788779   17460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/test-preload-866000/config.json: {Name:mk37ad820b25a8b083a07ea54048efa8ca889a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:00:11.788796   17460 cache.go:107] acquiring lock: {Name:mk304b79d606e7d0512c2951bcac95d35ef30546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.788810   17460 cache.go:107] acquiring lock: {Name:mkc74c271beff878b61df90b72e591a3208cd7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.788851   17460 cache.go:107] acquiring lock: {Name:mk0191f362bc34ad8b60a2d0ca01de1d89450add Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.789054   17460 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:00:11.789056   17460 start.go:360] acquireMachinesLock for test-preload-866000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:00:11.789075   17460 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 12:00:11.789050   17460 cache.go:107] acquiring lock: {Name:mkc96c550ad2566433c7c71f6ef5435732d74d9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.789067   17460 cache.go:107] acquiring lock: {Name:mk85d354cbf1fd2b77a36f529cbe7baefc84af15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.789082   17460 cache.go:107] acquiring lock: {Name:mka2a2504bab7025d8c91a4237718e32a58238ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.789100   17460 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "test-preload-866000"
	I0328 12:00:11.789122   17460 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0328 12:00:11.789128   17460 cache.go:107] acquiring lock: {Name:mk1211227fd432768b5be3d864f144ed9ce5201e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.788817   17460 cache.go:107] acquiring lock: {Name:mk26af5be3159ed9288ebbf4cb472c9a6b28442d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:00:11.789119   17460 start.go:93] Provisioning new machine with config: &{Name:test-preload-866000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-866000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:00:11.789174   17460 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:00:11.793688   17460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:00:11.789313   17460 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0328 12:00:11.789424   17460 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0328 12:00:11.789705   17460 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:00:11.789733   17460 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0328 12:00:11.794425   17460 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:00:11.800159   17460 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:00:11.801571   17460 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:00:11.801587   17460 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0328 12:00:11.801651   17460 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 12:00:11.801667   17460 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0328 12:00:11.801676   17460 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0328 12:00:11.803294   17460 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:00:11.803378   17460 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0328 12:00:11.811543   17460 start.go:159] libmachine.API.Create for "test-preload-866000" (driver="qemu2")
	I0328 12:00:11.811568   17460 client.go:168] LocalClient.Create starting
	I0328 12:00:11.811643   17460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:00:11.811675   17460 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:11.811683   17460 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:11.811725   17460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:00:11.811746   17460 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:11.811753   17460 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:11.812130   17460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:00:11.963999   17460 main.go:141] libmachine: Creating SSH key...
	I0328 12:00:12.003049   17460 main.go:141] libmachine: Creating Disk image...
	I0328 12:00:12.003077   17460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:00:12.003308   17460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:12.016377   17460 main.go:141] libmachine: STDOUT: 
	I0328 12:00:12.016406   17460 main.go:141] libmachine: STDERR: 
	I0328 12:00:12.016496   17460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2 +20000M
	I0328 12:00:12.028988   17460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:00:12.029018   17460 main.go:141] libmachine: STDERR: 
	I0328 12:00:12.029037   17460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:12.029042   17460 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:00:12.029073   17460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:9e:10:58:f4:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:12.031066   17460 main.go:141] libmachine: STDOUT: 
	I0328 12:00:12.031084   17460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:00:12.031103   17460 client.go:171] duration metric: took 219.5285ms to LocalClient.Create
	I0328 12:00:14.033341   17460 start.go:128] duration metric: took 2.244115416s to createHost
	I0328 12:00:14.033432   17460 start.go:83] releasing machines lock for "test-preload-866000", held for 2.24429575s
	W0328 12:00:14.033483   17460 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:14.049121   17460 out.go:177] * Deleting "test-preload-866000" in qemu2 ...
	W0328 12:00:14.076109   17460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:14.076141   17460 start.go:728] Will try again in 5 seconds ...
	W0328 12:00:14.262067   17460 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0328 12:00:14.262183   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 12:00:14.458266   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0328 12:00:14.460316   17460 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0328 12:00:14.460380   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0328 12:00:14.468836   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0328 12:00:14.472671   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0328 12:00:14.474247   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0328 12:00:14.477643   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0328 12:00:14.479958   17460 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0328 12:00:14.596848   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0328 12:00:14.596898   17460 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.807823583s
	I0328 12:00:14.596944   17460 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0328 12:00:15.903597   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0328 12:00:15.903645   17460 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.114797208s
	I0328 12:00:15.903673   17460 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0328 12:00:16.007798   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0328 12:00:16.007842   17460 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.218748417s
	I0328 12:00:16.007888   17460 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0328 12:00:16.196237   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0328 12:00:16.196290   17460 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.407385958s
	I0328 12:00:16.196335   17460 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0328 12:00:17.452900   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0328 12:00:17.452962   17460 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.66388275s
	I0328 12:00:17.452989   17460 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0328 12:00:17.522002   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0328 12:00:17.522071   17460 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.733202084s
	I0328 12:00:17.522095   17460 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0328 12:00:18.471057   17460 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0328 12:00:18.471105   17460 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.6822175s
	I0328 12:00:18.471132   17460 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0328 12:00:19.076611   17460 start.go:360] acquireMachinesLock for test-preload-866000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:00:19.076989   17460 start.go:364] duration metric: took 297.458µs to acquireMachinesLock for "test-preload-866000"
	I0328 12:00:19.077122   17460 start.go:93] Provisioning new machine with config: &{Name:test-preload-866000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-866000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:00:19.077366   17460 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:00:19.086025   17460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:00:19.136405   17460 start.go:159] libmachine.API.Create for "test-preload-866000" (driver="qemu2")
	I0328 12:00:19.136465   17460 client.go:168] LocalClient.Create starting
	I0328 12:00:19.136598   17460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:00:19.136662   17460 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:19.136677   17460 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:19.136736   17460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:00:19.136778   17460 main.go:141] libmachine: Decoding PEM data...
	I0328 12:00:19.136792   17460 main.go:141] libmachine: Parsing certificate...
	I0328 12:00:19.137309   17460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:00:19.310106   17460 main.go:141] libmachine: Creating SSH key...
	I0328 12:00:19.385618   17460 main.go:141] libmachine: Creating Disk image...
	I0328 12:00:19.385623   17460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:00:19.385809   17460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:19.398389   17460 main.go:141] libmachine: STDOUT: 
	I0328 12:00:19.398421   17460 main.go:141] libmachine: STDERR: 
	I0328 12:00:19.398485   17460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2 +20000M
	I0328 12:00:19.409601   17460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:00:19.409637   17460 main.go:141] libmachine: STDERR: 
	I0328 12:00:19.409652   17460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:19.409655   17460 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:00:19.409694   17460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:05:8d:a9:7b:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/test-preload-866000/disk.qcow2
	I0328 12:00:19.411606   17460 main.go:141] libmachine: STDOUT: 
	I0328 12:00:19.411626   17460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:00:19.411638   17460 client.go:171] duration metric: took 275.165416ms to LocalClient.Create
	I0328 12:00:21.412162   17460 start.go:128] duration metric: took 2.334708458s to createHost
	I0328 12:00:21.412242   17460 start.go:83] releasing machines lock for "test-preload-866000", held for 2.335179791s
	W0328 12:00:21.412565   17460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-866000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-866000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:00:21.424283   17460 out.go:177] 
	W0328 12:00:21.432227   17460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:00:21.432268   17460 out.go:239] * 
	* 
	W0328 12:00:21.436172   17460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:00:21.444264   17460 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-866000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-28 12:00:21.462401 -0700 PDT m=+746.223651543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-866000 -n test-preload-866000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-866000 -n test-preload-866000: exit status 7 (54.772041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-866000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-866000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-866000
--- FAIL: TestPreload (9.95s)
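Every VM-creation failure in this run reduces to the same root cause visible in the stderr above: nothing is accepting connections on the /var/run/socket_vmnet unix socket on this agent, so socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 ever starts. A minimal diagnostic sketch for the build host follows; the launchd service label is an assumption based on a standard lima-vm/socket_vmnet install, not something taken from this log:

	# Does the socket path from the logs above exist at all?
	ls -l /var/run/socket_vmnet
	# Probe the unix socket; "Connection refused" here reproduces the failure above
	nc -U /var/run/socket_vmnet </dev/null
	# If socket_vmnet runs as a launchd daemon, restart it (label assumed, may differ per install)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet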

TestScheduledStopUnix (10.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-160000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-160000 --memory=2048 --driver=qemu2 : exit status 80 (10.009712125s)

-- stdout --
	* [scheduled-stop-160000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-160000" primary control-plane node in "scheduled-stop-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-160000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-160000" primary control-plane node in "scheduled-stop-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-28 12:00:31.631713 -0700 PDT m=+756.392844376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-160000 -n scheduled-stop-160000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-160000 -n scheduled-stop-160000: exit status 7 (66.636209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-160000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-160000
--- FAIL: TestScheduledStopUnix (10.18s)

TestSkaffold (16.66s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2383063130 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2383063130 version: (1.043237334s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-553000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-553000 --memory=2600 --driver=qemu2 : exit status 80 (9.818547291s)

-- stdout --
	* [skaffold-553000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-553000" primary control-plane node in "skaffold-553000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-553000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-553000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-553000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-553000" primary control-plane node in "skaffold-553000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-553000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-553000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-28 12:00:48.289957 -0700 PDT m=+773.050891751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-553000 -n skaffold-553000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-553000 -n skaffold-553000: exit status 7 (64.566958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-553000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-553000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-553000
--- FAIL: TestSkaffold (16.66s)

TestRunningBinaryUpgrade (626.12s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2851898475 start -p running-upgrade-623000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2851898475 start -p running-upgrade-623000 --memory=2200 --vm-driver=qemu2 : (1m20.350049042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-623000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-623000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m25.8602425s)

-- stdout --
	* [running-upgrade-623000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-623000" primary control-plane node in "running-upgrade-623000" cluster
	* Updating the running qemu2 "running-upgrade-623000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0328 12:02:59.770590   17919 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:02:59.770706   17919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:02:59.770710   17919 out.go:304] Setting ErrFile to fd 2...
	I0328 12:02:59.770712   17919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:02:59.770829   17919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:02:59.771765   17919 out.go:298] Setting JSON to false
	I0328 12:02:59.789425   17919 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10951,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:02:59.789493   17919 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:02:59.794301   17919 out.go:177] * [running-upgrade-623000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:02:59.802289   17919 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:02:59.805329   17919 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:02:59.802364   17919 notify.go:220] Checking for updates...
	I0328 12:02:59.813278   17919 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:02:59.816368   17919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:02:59.817856   17919 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:02:59.821353   17919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:02:59.824649   17919 config.go:182] Loaded profile config "running-upgrade-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:02:59.828339   17919 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 12:02:59.831300   17919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:02:59.834362   17919 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:02:59.841235   17919 start.go:297] selected driver: qemu2
	I0328 12:02:59.841240   17919 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:02:59.841289   17919 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:02:59.843752   17919 cni.go:84] Creating CNI manager for ""
	I0328 12:02:59.843769   17919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:02:59.843793   17919 start.go:340] cluster config:
	{Name:running-upgrade-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:02:59.843840   17919 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:02:59.852333   17919 out.go:177] * Starting "running-upgrade-623000" primary control-plane node in "running-upgrade-623000" cluster
	I0328 12:02:59.856324   17919 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:02:59.856339   17919 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0328 12:02:59.856350   17919 cache.go:56] Caching tarball of preloaded images
	I0328 12:02:59.856400   17919 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:02:59.856405   17919 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0328 12:02:59.856468   17919 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/config.json ...
	I0328 12:02:59.856802   17919 start.go:360] acquireMachinesLock for running-upgrade-623000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:02:59.856826   17919 start.go:364] duration metric: took 18.792µs to acquireMachinesLock for "running-upgrade-623000"
	I0328 12:02:59.856834   17919 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:02:59.856839   17919 fix.go:54] fixHost starting: 
	I0328 12:02:59.857501   17919 fix.go:112] recreateIfNeeded on running-upgrade-623000: state=Running err=<nil>
	W0328 12:02:59.857509   17919 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:02:59.865331   17919 out.go:177] * Updating the running qemu2 "running-upgrade-623000" VM ...
	I0328 12:02:59.869314   17919 machine.go:94] provisionDockerMachine start ...
	I0328 12:02:59.869357   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:02:59.869480   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:02:59.869484   17919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 12:02:59.921535   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-623000
	
	I0328 12:02:59.921551   17919 buildroot.go:166] provisioning hostname "running-upgrade-623000"
	I0328 12:02:59.921613   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:02:59.921736   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:02:59.921742   17919 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-623000 && echo "running-upgrade-623000" | sudo tee /etc/hostname
	I0328 12:02:59.978469   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-623000
	
	I0328 12:02:59.978516   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:02:59.978625   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:02:59.978635   17919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-623000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-623000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-623000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 12:03:00.031941   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 12:03:00.031957   17919 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17877-15366/.minikube CaCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17877-15366/.minikube}
	I0328 12:03:00.031970   17919 buildroot.go:174] setting up certificates
	I0328 12:03:00.031975   17919 provision.go:84] configureAuth start
	I0328 12:03:00.031979   17919 provision.go:143] copyHostCerts
	I0328 12:03:00.032048   17919 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem, removing ...
	I0328 12:03:00.032055   17919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem
	I0328 12:03:00.032173   17919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem (1078 bytes)
	I0328 12:03:00.032348   17919 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem, removing ...
	I0328 12:03:00.032351   17919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem
	I0328 12:03:00.032400   17919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem (1123 bytes)
	I0328 12:03:00.032501   17919 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem, removing ...
	I0328 12:03:00.032505   17919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem
	I0328 12:03:00.032543   17919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem (1675 bytes)
	I0328 12:03:00.032630   17919 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-623000 san=[127.0.0.1 localhost minikube running-upgrade-623000]
	I0328 12:03:00.181355   17919 provision.go:177] copyRemoteCerts
	I0328 12:03:00.181408   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 12:03:00.181421   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:03:00.210586   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 12:03:00.217292   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 12:03:00.224283   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 12:03:00.230938   17919 provision.go:87] duration metric: took 198.9525ms to configureAuth
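
The copyHostCerts/configureAuth steps above stage the CA material and generate a Docker server certificate whose SANs ([127.0.0.1 localhost minikube running-upgrade-623000]) must cover every name a client may dial. minikube does this in-process with Go's crypto packages; a hypothetical openssl equivalent of the server-cert step (file names assumed, not what the test actually ran) would be:

    # CSR for the server key; the org matches the log's org=jenkins.running-upgrade-623000
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.running-upgrade-623000"
    # sign with the minikube CA and attach the same SANs (bash process substitution)
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-623000')
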
	I0328 12:03:00.230946   17919 buildroot.go:189] setting minikube options for container-runtime
	I0328 12:03:00.231051   17919 config.go:182] Loaded profile config "running-upgrade-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:03:00.231092   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:03:00.231185   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:03:00.231190   17919 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 12:03:00.284266   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 12:03:00.284275   17919 buildroot.go:70] root file system type: tmpfs
	I0328 12:03:00.284333   17919 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 12:03:00.284384   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:03:00.284485   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:03:00.284519   17919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 12:03:00.340782   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 12:03:00.340838   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:03:00.340945   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:03:00.340953   17919 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 12:03:00.395574   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 12:03:00.395586   17919 machine.go:97] duration metric: took 526.26ms to provisionDockerMachine
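
The unit install just above uses a write-then-swap pattern: the rendered unit is written to docker.service.new, and only if `diff -u` reports a difference is it moved into place and Docker restarted, so re-provisioning an unchanged machine is a no-op. The same idiom, generalized (UNIT is a placeholder):

    UNIT=/lib/systemd/system/docker.service
    # diff exits 0 when the files match, so the block runs only on a change
    sudo diff -u "$UNIT" "$UNIT.new" || {
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }
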
	I0328 12:03:00.395592   17919 start.go:293] postStartSetup for "running-upgrade-623000" (driver="qemu2")
	I0328 12:03:00.395598   17919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 12:03:00.395648   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 12:03:00.395663   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:03:00.422935   17919 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 12:03:00.424406   17919 info.go:137] Remote host: Buildroot 2021.02.12
	I0328 12:03:00.424413   17919 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/addons for local assets ...
	I0328 12:03:00.424477   17919 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/files for local assets ...
	I0328 12:03:00.424572   17919 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem -> 157842.pem in /etc/ssl/certs
	I0328 12:03:00.424658   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 12:03:00.427756   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:03:00.434786   17919 start.go:296] duration metric: took 39.188917ms for postStartSetup
	I0328 12:03:00.434799   17919 fix.go:56] duration metric: took 577.953959ms for fixHost
	I0328 12:03:00.434839   17919 main.go:141] libmachine: Using SSH client type: native
	I0328 12:03:00.434936   17919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9bf0] 0x1009bc450 <nil>  [] 0s} localhost 53135 <nil> <nil>}
	I0328 12:03:00.434940   17919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 12:03:00.486524   17919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711652580.423091683
	
	I0328 12:03:00.486532   17919 fix.go:216] guest clock: 1711652580.423091683
	I0328 12:03:00.486536   17919 fix.go:229] Guest: 2024-03-28 12:03:00.423091683 -0700 PDT Remote: 2024-03-28 12:03:00.4348 -0700 PDT m=+0.686755959 (delta=-11.708317ms)
	I0328 12:03:00.486547   17919 fix.go:200] guest clock delta is within tolerance: -11.708317ms
	I0328 12:03:00.486550   17919 start.go:83] releasing machines lock for "running-upgrade-623000", held for 629.712375ms
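
The fixHost check compares the guest's `date +%s.%N` output against the host wall clock and accepts small skews (here -11.7ms) without resyncing. A rough shell rendition of that comparison, assuming GNU date on the host and using the SSH endpoint shown in the log (the real check is done in Go in fix.go):

    guest=$(ssh -p 53135 docker@localhost date +%s.%N)  # guest wall clock
    host=$(date +%s.%N)                                 # host wall clock (GNU date assumed)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %+.6fs\n", g - h }'
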
	I0328 12:03:00.486608   17919 ssh_runner.go:195] Run: cat /version.json
	I0328 12:03:00.486618   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:03:00.486609   17919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 12:03:00.486668   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	W0328 12:03:00.487186   17919 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53250->127.0.0.1:53135: read: connection reset by peer
	I0328 12:03:00.487202   17919 retry.go:31] will retry after 147.559776ms: ssh: handshake failed: read tcp 127.0.0.1:53250->127.0.0.1:53135: read: connection reset by peer
	W0328 12:03:00.665452   17919 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0328 12:03:00.665547   17919 ssh_runner.go:195] Run: systemctl --version
	I0328 12:03:00.667437   17919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 12:03:00.668952   17919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 12:03:00.668975   17919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0328 12:03:00.672273   17919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0328 12:03:00.676757   17919 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 12:03:00.676765   17919 start.go:494] detecting cgroup driver to use...
	I0328 12:03:00.676868   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:03:00.683950   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0328 12:03:00.687463   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 12:03:00.690382   17919 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 12:03:00.690408   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 12:03:00.693323   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:03:00.696645   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 12:03:00.700141   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:03:00.703768   17919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 12:03:00.707138   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 12:03:00.709994   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 12:03:00.712922   17919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 12:03:00.716300   17919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 12:03:00.718898   17919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 12:03:00.721508   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:00.812196   17919 ssh_runner.go:195] Run: sudo systemctl restart containerd
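
The run of sed commands above rewrites /etc/containerd/config.toml in place before the restart: pinning the sandbox (pause) image, forcing the cgroupfs driver, and migrating v1 runtime entries to runc v2. The key edits, collected from the log for readability:

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
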
	I0328 12:03:00.822941   17919 start.go:494] detecting cgroup driver to use...
	I0328 12:03:00.823035   17919 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 12:03:00.828641   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:03:00.833147   17919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 12:03:00.839205   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:03:00.843978   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 12:03:00.848888   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:03:00.854434   17919 ssh_runner.go:195] Run: which cri-dockerd
	I0328 12:03:00.855828   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 12:03:00.858413   17919 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 12:03:00.863523   17919 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 12:03:00.954259   17919 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 12:03:01.034973   17919 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 12:03:01.035030   17919 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 12:03:01.040178   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:01.128274   17919 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:03:03.720402   17919 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5920825s)
	I0328 12:03:03.720470   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 12:03:03.727344   17919 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0328 12:03:03.733931   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:03:03.738661   17919 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 12:03:03.809246   17919 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 12:03:03.870639   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:03.957308   17919 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 12:03:03.963756   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:03:03.968466   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:04.046782   17919 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 12:03:04.086427   17919 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 12:03:04.086512   17919 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 12:03:04.088531   17919 start.go:562] Will wait 60s for crictl version
	I0328 12:03:04.088567   17919 ssh_runner.go:195] Run: which crictl
	I0328 12:03:04.090131   17919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 12:03:04.102882   17919 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
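
crictl resolves its runtime from /etc/crictl.yaml, which is why the log writes that file (pointing at cri-dockerd) before probing the socket. Reproducing the two steps by hand:

    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version   # should report RuntimeName: docker via cri-dockerd
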
	I0328 12:03:04.102953   17919 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:03:04.115945   17919 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:03:04.138219   17919 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0328 12:03:04.138307   17919 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0328 12:03:04.139626   17919 kubeadm.go:877] updating cluster {Name:running-upgrade-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0328 12:03:04.139671   17919 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:03:04.139714   17919 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:03:04.150053   17919 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:03:04.150062   17919 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:03:04.150112   17919 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:03:04.153119   17919 ssh_runner.go:195] Run: which lz4
	I0328 12:03:04.154514   17919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 12:03:04.155755   17919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 12:03:04.155765   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0328 12:03:04.816313   17919 docker.go:649] duration metric: took 661.8225ms to copy over tarball
	I0328 12:03:04.816367   17919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 12:03:06.699375   17919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.882973209s)
	I0328 12:03:06.699390   17919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 12:03:06.714880   17919 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:03:06.717864   17919 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0328 12:03:06.722721   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:06.806707   17919 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:03:08.181338   17919 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.374599958s)
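
The preload path avoids pulling each image individually: one lz4 tarball of the /var/lib/docker image store is scp'd to the guest, unpacked over /var, and Docker is restarted to pick up the layers. The unpack step from the log, annotated:

    # -I lz4 decompresses with lz4; --xattrs keeps security.capability so
    # binaries inside the images retain their file capabilities
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo systemctl restart docker
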
	I0328 12:03:08.181440   17919 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:03:08.192580   17919 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:03:08.192589   17919 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:03:08.192594   17919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 12:03:08.199687   17919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:03:08.199799   17919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:03:08.199876   17919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:03:08.199959   17919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:03:08.200094   17919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:03:08.200523   17919 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0328 12:03:08.200742   17919 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:03:08.201221   17919 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:03:08.209094   17919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:03:08.209452   17919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:03:08.209455   17919 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0328 12:03:08.209639   17919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:03:08.209811   17919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:03:08.209776   17919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:03:08.210056   17919 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:03:08.210062   17919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	W0328 12:03:10.172644   17919 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0328 12:03:10.173308   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:03:10.211107   17919 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0328 12:03:10.211156   17919 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:03:10.211247   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:03:10.223615   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0328 12:03:10.232965   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0328 12:03:10.233106   17919 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:03:10.246486   17919 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0328 12:03:10.246520   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0328 12:03:10.246626   17919 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0328 12:03:10.246646   17919 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:03:10.246693   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0328 12:03:10.259348   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:03:10.264100   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0328 12:03:10.286775   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:03:10.290391   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0328 12:03:10.293163   17919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0328 12:03:10.293180   17919 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:03:10.293222   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:03:10.299286   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:03:10.299555   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:03:10.299700   17919 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:03:10.299705   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0328 12:03:10.315024   17919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0328 12:03:10.315046   17919 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:03:10.315053   17919 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0328 12:03:10.315062   17919 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0328 12:03:10.315105   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0328 12:03:10.315105   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:03:10.322679   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0328 12:03:10.337170   17919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0328 12:03:10.337192   17919 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:03:10.337252   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:03:10.348186   17919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0328 12:03:10.348205   17919 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:03:10.348263   17919 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:03:10.420345   17919 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0328 12:03:10.420383   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0328 12:03:10.420403   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0328 12:03:10.420430   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0328 12:03:10.420384   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0328 12:03:10.420482   17919 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0328 12:03:10.421960   17919 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0328 12:03:10.421973   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0328 12:03:10.429266   17919 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0328 12:03:10.429276   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0328 12:03:10.456871   17919 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0328 12:03:10.670701   17919 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0328 12:03:10.671000   17919 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:03:10.704754   17919 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0328 12:03:10.704795   17919 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:03:10.704898   17919 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:03:11.866470   17919 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.161528s)
	I0328 12:03:11.866503   17919 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 12:03:11.866747   17919 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:03:11.871776   17919 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0328 12:03:11.871807   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0328 12:03:11.920603   17919 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:03:11.920616   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0328 12:03:12.179892   17919 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 12:03:12.179935   17919 cache_images.go:92] duration metric: took 3.987288125s to LoadCachedImages
	W0328 12:03:12.179977   17919 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
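
Each image repaired from the cache follows the same three-step dance visible above: inspect the local hash, remove the wrong-architecture copy, then stream the cached tarball into the daemon. Condensed, with the coredns image as the example:

    IMG=registry.k8s.io/coredns/coredns:v1.8.6
    TAR=/var/lib/minikube/images/coredns_v1.8.6
    docker image inspect --format '{{.Id}}' "$IMG"  # hash differs from the cached arm64 build
    docker rmi "$IMG"                               # drop the amd64 copy
    sudo cat "$TAR" | docker load                   # load the cached arm64 image
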
	I0328 12:03:12.179985   17919 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0328 12:03:12.180035   17919 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-623000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 12:03:12.180109   17919 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 12:03:12.195720   17919 cni.go:84] Creating CNI manager for ""
	I0328 12:03:12.195733   17919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:03:12.195741   17919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 12:03:12.195751   17919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-623000 NodeName:running-upgrade-623000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 12:03:12.195818   17919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-623000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 12:03:12.195872   17919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0328 12:03:12.199836   17919 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 12:03:12.199868   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 12:03:12.202582   17919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0328 12:03:12.207511   17919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 12:03:12.212334   17919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0328 12:03:12.217548   17919 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0328 12:03:12.219070   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:03:12.304409   17919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:03:12.309402   17919 certs.go:68] Setting up /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000 for IP: 10.0.2.15
	I0328 12:03:12.309407   17919 certs.go:194] generating shared ca certs ...
	I0328 12:03:12.309416   17919 certs.go:226] acquiring lock for ca certs: {Name:mk77bea021df8758c6a5a63d76349b59be8fba89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:03:12.309632   17919 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key
	I0328 12:03:12.309664   17919 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key
	I0328 12:03:12.309668   17919 certs.go:256] generating profile certs ...
	I0328 12:03:12.309722   17919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.key
	I0328 12:03:12.309732   17919 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key.64f4487a
	I0328 12:03:12.309743   17919 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt.64f4487a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0328 12:03:12.356866   17919 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt.64f4487a ...
	I0328 12:03:12.356871   17919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt.64f4487a: {Name:mk8dd4df42830355de1caab19531a7291fdb5be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:03:12.357062   17919 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key.64f4487a ...
	I0328 12:03:12.357067   17919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key.64f4487a: {Name:mk3a9bb7c747d64e3071fd2608d60f0101a694d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:03:12.357177   17919 certs.go:381] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt.64f4487a -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt
	I0328 12:03:12.357286   17919 certs.go:385] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key.64f4487a -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key
	I0328 12:03:12.357392   17919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/proxy-client.key
	I0328 12:03:12.357495   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem (1338 bytes)
	W0328 12:03:12.357521   17919 certs.go:480] ignoring /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784_empty.pem, impossibly tiny 0 bytes
	I0328 12:03:12.357526   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 12:03:12.357544   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem (1078 bytes)
	I0328 12:03:12.357561   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem (1123 bytes)
	I0328 12:03:12.357577   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem (1675 bytes)
	I0328 12:03:12.357612   17919 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:03:12.357934   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 12:03:12.365051   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 12:03:12.372259   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 12:03:12.379531   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 12:03:12.386573   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 12:03:12.393069   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 12:03:12.400275   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 12:03:12.407851   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 12:03:12.415125   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 12:03:12.421797   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem --> /usr/share/ca-certificates/15784.pem (1338 bytes)
	I0328 12:03:12.428631   17919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /usr/share/ca-certificates/157842.pem (1708 bytes)
	I0328 12:03:12.435721   17919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 12:03:12.440576   17919 ssh_runner.go:195] Run: openssl version
	I0328 12:03:12.442356   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/157842.pem && ln -fs /usr/share/ca-certificates/157842.pem /etc/ssl/certs/157842.pem"
	I0328 12:03:12.445404   17919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/157842.pem
	I0328 12:03:12.446880   17919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 18:49 /usr/share/ca-certificates/157842.pem
	I0328 12:03:12.446908   17919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/157842.pem
	I0328 12:03:12.448498   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/157842.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 12:03:12.451312   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 12:03:12.454278   17919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:03:12.455720   17919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 19:02 /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:03:12.455741   17919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:03:12.457478   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 12:03:12.460598   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15784.pem && ln -fs /usr/share/ca-certificates/15784.pem /etc/ssl/certs/15784.pem"
	I0328 12:03:12.463766   17919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15784.pem
	I0328 12:03:12.465729   17919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 18:49 /usr/share/ca-certificates/15784.pem
	I0328 12:03:12.465756   17919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15784.pem
	I0328 12:03:12.467700   17919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15784.pem /etc/ssl/certs/51391683.0"
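
The openssl/ln pairs above build OpenSSL's hashed CA directory: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, which is how TLS clients on the guest find it. For one certificate:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA, per the log
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
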
	I0328 12:03:12.470637   17919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 12:03:12.472143   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 12:03:12.474098   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 12:03:12.475951   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 12:03:12.477735   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 12:03:12.479618   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 12:03:12.481396   17919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
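
`openssl x509 -checkend 86400` exits nonzero if the certificate expires within the next 86400 seconds (24h); minikube runs it per control-plane cert to decide whether regeneration is needed. Looping over a few of the certs checked above:

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
        || echo "$crt.crt expires within 24h"
    done
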
	I0328 12:03:12.483308   17919 kubeadm.go:391] StartCluster: {Name:running-upgrade-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53167 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:03:12.483414   17919 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:03:12.493452   17919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 12:03:12.496881   17919 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 12:03:12.496887   17919 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 12:03:12.496890   17919 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 12:03:12.496910   17919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 12:03:12.500057   17919 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:03:12.500089   17919 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-623000" does not appear in /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:03:12.500104   17919 kubeconfig.go:62] /Users/jenkins/minikube-integration/17877-15366/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-623000" cluster setting kubeconfig missing "running-upgrade-623000" context setting]
	I0328 12:03:12.500258   17919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:03:12.501136   17919 kapi.go:59] client config for running-upgrade-623000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101caed60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:03:12.501955   17919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 12:03:12.504766   17919 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-623000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
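
The drift check above works by exit status: diff -u returns 0 when the on-disk kubeadm.yaml matches the newly rendered one and 1 when they differ, and a difference (here the criSocket URI scheme plus the cgroupDriver/hairpinMode/runtimeRequestTimeout kubelet settings) routes the restart into full reconfiguration. A minimal Go sketch of that decision, illustrative rather than minikube's actual kubeadm.go code; the log runs the diff under sudo over SSH:

package main

import (
    "fmt"
    "os/exec"
)

// detectDrift runs `diff -u old new`: exit 0 means identical files,
// exit 1 means the files differ (config drift), exit 2 means diff
// itself failed (e.g. a missing file).
func detectDrift(oldPath, newPath string) (bool, error) {
    out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    if err == nil {
        return false, nil // identical: no reconfigure needed
    }
    if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
        fmt.Printf("detected kubeadm config drift:\n%s", out)
        return true, nil
    }
    return false, err // diff error, not a difference
}

func main() {
    drift, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    fmt.Println("drift:", drift, "err:", err)
}
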
	I0328 12:03:12.504772   17919 kubeadm.go:1154] stopping kube-system containers ...
	I0328 12:03:12.504807   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:03:12.515434   17919 docker.go:483] Stopping containers: [e794e8089850 f683bea1f586 4b052f640d3d e9538dd7cae6 4a2ee84d2f88 95ea60112fdb 2e95f9e6067c cbfd7b3443e1 52e0bfbb6769 13b68fcf9326 34fa11726dcc 920aa6ef1fe7 2a4c8793cd50 c253749af967 ed0ffe976640]
	I0328 12:03:12.515508   17919 ssh_runner.go:195] Run: docker stop e794e8089850 f683bea1f586 4b052f640d3d e9538dd7cae6 4a2ee84d2f88 95ea60112fdb 2e95f9e6067c cbfd7b3443e1 52e0bfbb6769 13b68fcf9326 34fa11726dcc 920aa6ef1fe7 2a4c8793cd50 c253749af967 ed0ffe976640
	I0328 12:03:12.526263   17919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 12:03:12.625627   17919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:03:12.630239   17919 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 28 19:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 28 19:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 28 19:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 28 19:02 /etc/kubernetes/scheduler.conf
	
	I0328 12:03:12.630279   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf
	I0328 12:03:12.633826   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:03:12.633854   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:03:12.637314   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf
	I0328 12:03:12.640349   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:03:12.640371   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:03:12.643893   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf
	I0328 12:03:12.647309   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:03:12.647328   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:03:12.650588   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf
	I0328 12:03:12.653219   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:03:12.653241   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
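
Each grep/rm pair above applies the same rule: a kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (https://control-plane.minikube.internal:53167) is considered stale and deleted, to be regenerated by the kubeconfig init phase further down. A sketch of that loop under those assumptions, with a hypothetical helper name:

package main

import (
    "os"
    "strings"
)

// cleanupStaleKubeconfigs deletes any of the four component kubeconfigs
// that does not mention the expected apiserver endpoint, so kubeadm can
// regenerate them. Paths and endpoint are taken from the log above.
func cleanupStaleKubeconfigs(endpoint string) error {
    files := []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }
    for _, f := range files {
        data, err := os.ReadFile(f)
        if err != nil {
            continue // already absent: nothing to clean
        }
        if !strings.Contains(string(data), endpoint) {
            if err := os.Remove(f); err != nil {
                return err
            }
        }
    }
    return nil
}

func main() {
    _ = cleanupStaleKubeconfigs("https://control-plane.minikube.internal:53167")
}
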
	I0328 12:03:12.655958   17919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:03:12.659055   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:03:12.680872   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:03:13.274441   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:03:13.498417   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:03:13.519102   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
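
Rather than a full `kubeadm init` (which would wipe existing cluster state), the restart path replays individual init phases in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence as it appears in the five Run lines above; the PATH prefix pins the kubeadm binary matching the cluster's v1.24.1:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

// Replay the kubeadm init phases shown in the log, stopping at the
// first failure. Illustrative only; minikube's ssh_runner executes
// these on the guest VM, not locally.
func main() {
    phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    for _, p := range phases {
        cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
            `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
        if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
            log.Fatalf("phase %q failed: %v\n%s", p, err, out)
        }
    }
}
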
	I0328 12:03:13.544085   17919 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:03:13.544169   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:03:14.046524   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:03:14.546256   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:03:15.046262   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:03:15.051057   17919 api_server.go:72] duration metric: took 1.506955875s to wait for apiserver process to appear ...
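
The process wait above polls pgrep roughly every 500ms until a kube-apiserver process matching the pattern exists; here it took about 1.5s (attempts at 13.54, 14.05, 14.55, 15.05). A sketch of that poll loop; the function name and timeout handling are illustrative, not minikube's exact api_server.go code:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServerProcess retries pgrep every 500ms until it exits 0
// (a matching process exists) or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
    fmt.Println(waitForAPIServerProcess(6 * time.Minute))
}
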
	I0328 12:03:15.051067   17919 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:03:15.051079   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:20.053381   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:20.053453   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:25.054211   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:25.054294   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:30.055327   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:30.055428   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:35.056877   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:35.056960   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:40.058979   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:40.059062   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:45.061301   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:45.061384   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:50.063986   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:50.064040   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:03:55.065775   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:03:55.065843   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:00.068267   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:00.068293   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:05.069505   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:05.069620   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:10.072384   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:10.072469   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:15.074668   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
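
Every healthz probe in the block above has the same shape: an HTTPS GET against https://10.0.2.15:8443/healthz that times out after about 5 seconds, the gap between each "Checking" and "stopped" pair; every single probe timing out means nothing is answering on 8443 at all. The real client authenticates with the client certificate from the rest.Config dumped earlier; the sketch below skips TLS verification as a simplifying assumption, since only liveness is at issue:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// checkHealthz issues one probe with a 5s client timeout, matching the
// cadence in the log. A healthy apiserver answers the literal body "ok".
func checkHealthz(url string) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get(url)
    if err != nil {
        return err // in this log: context deadline exceeded
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if string(body) != "ok" {
        return fmt.Errorf("healthz returned %q", body)
    }
    return nil
}

func main() {
    fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}
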
	I0328 12:04:15.075105   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:15.108964   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:15.109100   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:15.128265   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:15.128356   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:15.142862   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:15.142939   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:15.155347   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:15.155420   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:15.166219   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:15.166288   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:15.176418   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:15.176484   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:15.186798   17919 logs.go:276] 0 containers: []
	W0328 12:04:15.186808   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:15.186863   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:15.205047   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:15.205066   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:15.205072   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:15.277078   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:15.277092   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:15.293242   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:15.293255   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:15.305603   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:15.305615   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:15.319972   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:15.319985   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:15.331190   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:15.331202   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:15.344754   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:15.344777   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:15.356034   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:15.356044   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:15.396028   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:15.396035   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:15.400228   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:15.400235   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:15.421506   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:15.421516   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:15.438379   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:15.438390   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:15.452341   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:15.452352   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:15.471311   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:15.471328   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:15.487210   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:15.487220   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:15.498383   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:15.498393   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
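
Once the apiserver has failed healthz for about a minute, minikube runs the diagnostic sweep above: for each control-plane component it resolves container IDs with a name-filtered docker ps -a, tails the last 400 lines of each container, and pulls kubelet, Docker, and dmesg output via journalctl and dmesg (note the `which crictl || echo crictl` fallback in the container-status command). The same sweep repeats after each subsequent healthz timeout for the rest of this log. A sketch of one sweep over the component list seen above:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// For each component, list matching k8s_<name> containers (running or
// exited), then tail each container's last 400 log lines. The journalctl
// and dmesg steps from the log are omitted here for brevity.
func main() {
    components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    for _, c := range components {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
        if err != nil {
            continue
        }
        for _, id := range strings.Fields(string(out)) {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
        }
    }
}
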
	I0328 12:04:18.026755   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:23.029605   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:23.030106   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:23.067277   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:23.067401   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:23.092956   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:23.093045   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:23.107689   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:23.107762   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:23.119551   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:23.119618   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:23.129810   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:23.129873   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:23.140161   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:23.140220   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:23.150622   17919 logs.go:276] 0 containers: []
	W0328 12:04:23.150633   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:23.150683   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:23.160742   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:23.160762   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:23.160767   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:23.175319   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:23.175330   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:23.200540   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:23.200551   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:23.221346   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:23.221356   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:23.232400   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:23.232411   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:23.243630   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:23.243641   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:23.254904   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:23.254913   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:23.268453   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:23.268464   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:23.309480   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:23.309492   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:23.323745   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:23.323758   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:23.334944   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:23.334954   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:23.352327   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:23.352336   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:04:23.378572   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:23.378582   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:23.390925   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:23.390937   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:23.395802   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:23.395809   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:23.431858   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:23.431869   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:25.949890   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:30.951377   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:30.951933   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:30.993319   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:30.993456   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:31.015027   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:31.015142   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:31.038617   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:31.038687   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:31.049751   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:31.049822   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:31.061149   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:31.061220   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:31.074507   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:31.074581   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:31.085246   17919 logs.go:276] 0 containers: []
	W0328 12:04:31.085261   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:31.085320   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:31.095387   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:31.095402   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:31.095406   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:31.107260   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:31.107269   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:31.121059   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:31.121071   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:04:31.144692   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:31.144698   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:31.182983   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:31.182995   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:31.219136   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:31.219147   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:31.233088   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:31.233097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:31.253163   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:31.253174   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:31.265049   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:31.265062   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:31.276830   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:31.276845   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:31.294913   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:31.294922   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:31.306633   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:31.306643   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:31.311414   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:31.311420   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:31.325212   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:31.325221   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:31.342349   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:31.342359   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:31.353925   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:31.353937   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:33.869570   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:38.872111   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:38.872571   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:38.911780   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:38.911904   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:38.932096   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:38.932191   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:38.946816   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:38.946884   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:38.959276   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:38.959347   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:38.970505   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:38.970577   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:38.980971   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:38.981035   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:38.991015   17919 logs.go:276] 0 containers: []
	W0328 12:04:38.991026   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:38.991083   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:39.005743   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:39.005768   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:39.005773   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:39.019349   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:39.019359   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:39.034745   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:39.034757   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:39.046676   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:39.046686   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:39.064430   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:39.064440   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:39.075969   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:39.075977   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:39.088124   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:39.088136   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:39.093744   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:39.093755   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:39.111022   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:39.111035   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:39.122607   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:39.122618   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:39.135345   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:39.135356   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:39.155177   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:39.155188   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:04:39.181126   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:39.181135   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:39.219578   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:39.219593   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:39.256823   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:39.256836   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:39.270829   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:39.270839   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:41.794467   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:46.797473   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:46.797993   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:46.838158   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:46.838282   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:46.859384   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:46.859478   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:46.875588   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:46.875654   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:46.889858   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:46.889917   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:46.901064   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:46.901123   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:46.911498   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:46.911561   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:46.922403   17919 logs.go:276] 0 containers: []
	W0328 12:04:46.922415   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:46.922477   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:46.937314   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:46.937330   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:46.937336   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:46.951798   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:46.951809   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:46.969628   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:46.969637   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:47.003944   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:47.003956   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:47.024061   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:47.024074   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:47.039352   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:47.039363   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:47.051090   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:47.051101   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:47.062548   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:47.062557   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:47.074708   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:47.074718   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:47.092525   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:47.092535   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:47.106048   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:47.106060   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:47.117640   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:47.117650   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:04:47.141589   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:47.141598   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:47.179997   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:47.180011   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:47.184348   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:47.184355   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:47.198048   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:47.198057   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:49.710899   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:04:54.713332   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:04:54.713728   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:04:54.747845   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:04:54.747963   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:04:54.767484   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:04:54.767593   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:04:54.781888   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:04:54.781958   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:04:54.794154   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:04:54.794241   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:04:54.804851   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:04:54.804924   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:04:54.814921   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:04:54.814983   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:04:54.825322   17919 logs.go:276] 0 containers: []
	W0328 12:04:54.825333   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:04:54.825390   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:04:54.837518   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:04:54.837535   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:04:54.837540   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:04:54.841882   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:04:54.841888   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:04:54.855369   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:04:54.855380   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:04:54.872506   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:04:54.872517   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:04:54.883387   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:04:54.883399   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:04:54.921815   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:04:54.921824   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:04:54.956230   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:04:54.956245   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:04:54.970303   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:04:54.970312   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:04:54.985600   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:04:54.985613   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:04:54.996942   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:04:54.996952   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:04:55.020523   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:04:55.020535   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:04:55.037980   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:04:55.037992   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:04:55.050126   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:04:55.050136   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:04:55.076972   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:04:55.076984   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:04:55.088136   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:04:55.088147   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:04:55.107347   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:04:55.107361   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:04:57.622988   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:02.625469   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:02.625751   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:02.656039   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:02.656172   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:02.675823   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:02.675911   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:02.688944   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:02.689016   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:02.700306   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:02.700369   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:02.710441   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:02.710514   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:02.724986   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:02.725049   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:02.735224   17919 logs.go:276] 0 containers: []
	W0328 12:05:02.735235   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:02.735284   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:02.745255   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:02.745272   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:02.745278   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:02.786518   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:02.786536   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:02.791115   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:02.791122   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:02.804503   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:02.804515   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:02.818161   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:02.818172   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:02.841660   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:02.841675   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:02.852512   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:02.852522   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:02.863890   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:02.863901   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:02.888837   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:02.888845   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:02.900234   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:02.900244   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:02.936329   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:02.936341   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:02.949988   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:02.949998   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:02.967334   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:02.967347   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:02.978909   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:02.978922   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:02.996038   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:02.996048   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:03.007484   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:03.007498   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:05.524501   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:10.527141   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:10.527306   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:10.539570   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:10.539646   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:10.551467   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:10.551540   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:10.562340   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:10.562407   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:10.574591   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:10.574660   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:10.585555   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:10.585626   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:10.596679   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:10.596748   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:10.606918   17919 logs.go:276] 0 containers: []
	W0328 12:05:10.606928   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:10.606982   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:10.617898   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:10.617914   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:10.617919   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:10.635239   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:10.635248   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:10.646748   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:10.646757   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:10.666243   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:10.666252   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:10.690980   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:10.690987   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:10.730830   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:10.730836   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:10.745377   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:10.745390   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:10.765645   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:10.765658   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:10.779535   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:10.779544   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:10.784275   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:10.784282   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:10.795828   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:10.795839   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:10.809477   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:10.809489   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:10.821735   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:10.821747   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:10.864393   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:10.864408   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:10.880056   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:10.880069   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:10.891921   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:10.891931   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:13.405906   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:18.408328   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:18.408777   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:18.449572   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:18.449715   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:18.472928   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:18.473040   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:18.491360   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:18.491441   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:18.503633   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:18.503713   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:18.514158   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:18.514231   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:18.525115   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:18.525186   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:18.535645   17919 logs.go:276] 0 containers: []
	W0328 12:05:18.535655   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:18.535718   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:18.547134   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:18.547149   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:18.547155   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:18.560592   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:18.560604   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:18.564901   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:18.564909   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:18.578391   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:18.578402   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:18.590342   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:18.590352   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:18.603245   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:18.603255   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:18.620476   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:18.620486   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:18.645533   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:18.645540   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:18.657918   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:18.657929   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:18.697956   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:18.697963   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:18.734505   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:18.734515   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:18.754529   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:18.754540   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:18.772366   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:18.772377   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:18.785762   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:18.785774   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:18.799925   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:18.799938   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:18.815454   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:18.815464   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:21.329120   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:26.331535   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
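	The two lines above recur for the remainder of this log: each `Checking apiserver healthz` probe gives up after roughly five seconds with a client timeout, and every failure triggers another round of the diagnostic collection that follows. A minimal Go sketch of that poll-and-timeout pattern — the URL and the ~5 s per-attempt cap are read off the log lines; the function name, overall deadline, and insecure TLS for the VM's self-signed apiserver certificate are assumptions, not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it returns 200 OK or
// the overall deadline passes. Each attempt is capped at perTry, which is
// what produces the ~5 s gap between "Checking apiserver healthz" and
// "stopped: ... Client.Timeout exceeded" in the log above.
func waitForHealthz(url string, perTry, overall time.Duration) error {
	client := &http.Client{
		Timeout: perTry, // per-attempt cap (assumed 5 s, per the log)
		Transport: &http.Transport{
			// Assumption: the in-VM apiserver certificate is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(2 * time.Second) // back off, then probe again
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz",
		5*time.Second, 4*time.Minute))
}
```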
	I0328 12:05:26.331722   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:26.350980   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:26.351057   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:26.362403   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:26.362482   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:26.374623   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:26.374720   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:26.386261   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:26.386335   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:26.397563   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:26.397633   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:26.408357   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:26.408435   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:26.418652   17919 logs.go:276] 0 containers: []
	W0328 12:05:26.418670   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:26.418728   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:26.429338   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:26.429354   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:26.429360   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:26.449688   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:26.449699   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:26.467218   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:26.467231   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:26.478880   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:26.478893   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:26.494487   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:26.494500   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:26.515656   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:26.515668   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:26.521524   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:26.521530   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:26.536042   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:26.536054   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:26.556272   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:26.556284   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:26.582102   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:26.582109   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:26.593985   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:26.593996   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:26.605472   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:26.605483   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:26.618624   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:26.618636   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:26.634275   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:26.634286   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:26.650385   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:26.650402   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:26.689709   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:26.689716   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
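	Each collection pass above has the same two-step shape: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` enumerates container IDs per control-plane component (two IDs apiece for apiserver, etcd, scheduler, and controller-manager — likely an exited attempt plus its restart), then `docker logs --tail 400 <id>` is tailed for each ID found. A minimal local Go sketch of that flow — the helper names are hypothetical, and the real tool runs these commands inside the VM over SSH (the `ssh_runner.go` lines), which this sketch omits:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of (possibly exited) containers whose name matches
// the kubeadm convention k8s_<component>, mirroring the "docker ps -a
// --filter ..." lines in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 log lines of one container, mirroring the
// `docker logs --tail 400 <id>` invocations above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
		}
	}
}
```

	On this run the loop would take the not-found branch for kindnet, matching the repeated `No container was found matching "kindnet"` warnings throughout the log.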
	I0328 12:05:29.232101   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:34.235121   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:34.235522   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:34.273081   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:34.273247   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:34.294958   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:34.295063   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:34.309619   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:34.309697   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:34.322129   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:34.322200   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:34.334076   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:34.334136   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:34.344489   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:34.344550   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:34.354478   17919 logs.go:276] 0 containers: []
	W0328 12:05:34.354489   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:34.354544   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:34.366099   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:34.366117   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:34.366123   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:34.405733   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:34.405742   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:34.422650   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:34.422663   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:34.435100   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:34.435110   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:34.471434   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:34.471445   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:34.485420   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:34.485429   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:34.497554   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:34.497565   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:34.514943   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:34.514953   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:34.519721   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:34.519728   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:34.539295   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:34.539305   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:34.552851   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:34.552861   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:34.564187   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:34.564201   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:34.589032   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:34.589043   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:34.600745   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:34.600756   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:34.612726   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:34.612739   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:34.627780   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:34.627790   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:37.150017   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:42.151749   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:42.151943   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:42.166253   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:42.166336   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:42.178262   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:42.178332   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:42.189270   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:42.189337   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:42.200128   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:42.200203   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:42.210702   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:42.210791   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:42.221267   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:42.221337   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:42.231149   17919 logs.go:276] 0 containers: []
	W0328 12:05:42.231163   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:42.231227   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:42.241923   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:42.241939   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:42.241946   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:42.256820   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:42.256831   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:42.268504   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:42.268519   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:42.272911   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:42.272918   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:42.286666   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:42.286680   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:42.322352   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:42.322365   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:42.336175   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:42.336189   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:42.356229   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:42.356240   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:42.370336   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:42.370346   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:42.387431   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:42.387439   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:42.398690   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:42.398700   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:42.412549   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:42.412563   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:42.452629   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:42.452639   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:42.466547   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:42.466557   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:42.484227   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:42.484242   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:42.508342   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:42.508348   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:45.021853   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:50.024255   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:50.024654   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:50.067373   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:50.067512   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:50.088757   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:50.088870   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:50.103463   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:50.103543   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:50.125315   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:50.125388   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:50.135912   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:50.135982   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:50.146227   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:50.146299   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:50.165184   17919 logs.go:276] 0 containers: []
	W0328 12:05:50.165193   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:50.165255   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:50.175773   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:50.175794   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:50.175802   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:50.196077   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:50.196088   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:50.217518   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:50.217533   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:50.228824   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:50.228836   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:50.269268   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:50.269276   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:50.305917   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:50.305927   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:50.320189   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:50.320199   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:50.335785   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:50.335794   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:50.349411   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:50.349419   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:50.373706   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:50.373713   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:50.392907   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:50.392917   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:05:50.404827   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:50.404838   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:50.416411   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:50.416427   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:50.420683   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:50.420689   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:50.432135   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:50.432144   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:50.443297   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:50.443310   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:52.968868   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:05:57.971650   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:05:57.972085   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:05:58.008160   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:05:58.008287   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:05:58.030074   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:05:58.030181   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:05:58.044659   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:05:58.044730   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:05:58.056700   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:05:58.056776   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:05:58.067259   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:05:58.067325   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:05:58.087648   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:05:58.087718   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:05:58.104630   17919 logs.go:276] 0 containers: []
	W0328 12:05:58.104642   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:05:58.104695   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:05:58.118273   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:05:58.118293   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:05:58.118299   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:05:58.155469   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:05:58.155482   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:05:58.174553   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:05:58.174565   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:05:58.185662   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:05:58.185672   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:05:58.202280   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:05:58.202290   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:05:58.214147   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:05:58.214160   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:05:58.227304   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:05:58.227313   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:05:58.239108   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:05:58.239119   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:05:58.259704   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:05:58.259714   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:05:58.284068   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:05:58.284078   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:05:58.298250   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:05:58.298260   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:05:58.311764   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:05:58.311774   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:05:58.328962   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:05:58.328973   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:05:58.368683   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:05:58.368692   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:05:58.372846   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:05:58.372851   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:05:58.383999   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:05:58.384010   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:00.900003   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:05.902387   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:05.902965   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:05.943337   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:05.943465   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:05.963209   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:05.963325   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:05.978093   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:05.978178   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:05.990168   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:05.990237   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:06.000950   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:06.001018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:06.011562   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:06.011639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:06.021452   17919 logs.go:276] 0 containers: []
	W0328 12:06:06.021465   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:06.021537   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:06.032223   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:06.032242   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:06.032254   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:06.046143   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:06.046155   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:06.061432   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:06.061445   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:06.072795   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:06.072807   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:06.090553   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:06.090562   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:06.107898   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:06.107911   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:06.111970   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:06.111978   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:06.125761   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:06.125771   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:06.143087   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:06.143097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:06.158088   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:06.158097   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:06.197569   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:06.197580   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:06.218441   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:06.218454   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:06.230095   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:06.230106   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:06.242218   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:06.242231   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:06.283455   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:06.283468   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:06.297086   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:06.297099   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:08.822233   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:13.824636   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:13.824867   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:13.842441   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:13.842528   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:13.855902   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:13.855979   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:13.868019   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:13.868084   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:13.884942   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:13.885018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:13.895289   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:13.895348   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:13.905933   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:13.906004   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:13.921003   17919 logs.go:276] 0 containers: []
	W0328 12:06:13.921013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:13.921065   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:13.937048   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:13.937068   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:13.937073   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:13.962190   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:13.962205   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:13.977487   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:13.977497   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:13.988735   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:13.988748   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:14.023236   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:14.023247   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:14.048581   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:14.048592   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:14.062327   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:14.062340   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:14.079720   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:14.079731   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:14.091373   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:14.091386   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:14.111674   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:14.111686   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:14.125049   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:14.125057   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:14.129198   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:14.129203   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:14.151627   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:14.151633   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:14.189960   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:14.189966   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:14.203666   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:14.203678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:14.223632   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:14.223641   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:16.741507   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:21.743533   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:21.743608   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:21.754998   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:21.755074   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:21.765693   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:21.765763   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:21.777265   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:21.777340   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:21.788490   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:21.788570   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:21.799306   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:21.799381   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:21.813240   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:21.813308   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:21.824798   17919 logs.go:276] 0 containers: []
	W0328 12:06:21.824810   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:21.824870   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:21.835125   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:21.835143   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:21.835149   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:21.840068   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:21.840077   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:21.858363   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:21.858374   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:21.881684   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:21.881700   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:21.896548   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:21.896558   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:21.916233   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:21.916244   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:21.931633   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:21.931646   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:21.943208   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:21.943218   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:21.979625   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:21.979635   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:21.993952   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:21.993962   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:22.010221   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:22.010232   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:22.038646   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:22.038657   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:22.052116   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:22.052128   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:22.092093   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:22.092103   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:22.107213   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:22.107225   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:22.118601   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:22.118611   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:24.639396   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:29.641734   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:29.641942   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:29.663147   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:29.663250   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:29.679025   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:29.679136   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:29.692382   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:29.692439   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:29.705559   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:29.705634   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:29.717909   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:29.717988   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:29.729511   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:29.729580   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:29.741160   17919 logs.go:276] 0 containers: []
	W0328 12:06:29.741172   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:29.741231   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:29.752273   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:29.752293   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:29.752299   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:29.779427   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:29.779441   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:29.794634   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:29.794645   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:29.810447   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:29.810459   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:29.832932   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:29.832942   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:29.844894   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:29.844905   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:29.882370   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:29.882382   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:29.903482   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:29.903494   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:29.915947   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:29.915959   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:29.940324   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:29.940335   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:29.982557   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:29.982567   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:29.995399   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:29.995410   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:30.016529   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:30.016539   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:30.020752   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:30.020759   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:30.031821   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:30.031835   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:30.044396   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:30.044412   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:32.564313   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:37.566635   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:37.566783   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:37.584438   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:37.584516   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:37.596891   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:37.596963   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:37.607464   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:37.607523   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:37.618015   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:37.618075   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:37.632221   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:37.632291   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:37.642556   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:37.642612   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:37.653539   17919 logs.go:276] 0 containers: []
	W0328 12:06:37.653551   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:37.653618   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:37.669082   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:37.669097   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:37.669104   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:37.681502   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:37.681513   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:37.695473   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:37.695486   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:37.707323   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:37.707334   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:37.711972   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:37.711980   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:37.723163   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:37.723175   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:37.739001   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:37.739013   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:37.778440   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:37.778453   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:37.790666   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:37.790677   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:37.805579   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:37.805590   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:37.830977   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:37.830989   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:37.852597   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:37.852607   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:37.876240   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:37.876248   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:37.888018   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:37.888031   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:37.922529   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:37.922538   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:37.937345   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:37.937356   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:40.457958   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:45.460378   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:45.460636   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:45.484321   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:45.484425   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:45.500026   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:45.500104   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:45.514063   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:45.514131   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:45.525813   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:45.525883   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:45.535958   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:45.536025   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:45.546161   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:45.546226   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:45.555743   17919 logs.go:276] 0 containers: []
	W0328 12:06:45.555756   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:45.555814   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:45.566157   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:45.566173   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:45.566178   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:45.605722   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:45.605736   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:45.623362   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:45.623375   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:45.645810   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:45.645820   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:45.683990   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:45.684000   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:45.698330   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:45.698341   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:45.712243   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:45.712254   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:45.717084   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:45.717093   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:45.730871   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:45.730882   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:45.755838   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:45.755849   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:45.767758   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:45.767777   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:45.783603   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:45.783614   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:45.795242   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:45.795253   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:45.813105   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:45.813117   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:45.824637   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:45.824648   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:45.836388   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:45.836399   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
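
	Each of the sweeps above and below follows the same pattern: whenever the apiserver healthz probe times out, minikube enumerates the control-plane containers by docker name filter and then tails the last 400 log lines of each one it finds. A condensed shell equivalent of one sweep (a sketch assembled from the commands in this log, not minikube's own code; it assumes the docker runtime used in this run):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}'); do
	    docker logs --tail 400 "$id"   # the "Gathering logs for ..." step above
	  done
	done

	Note that the kindnet filter matches zero containers here, which is what produces the repeated "No container was found matching" warning.
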
	I0328 12:06:48.351984   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:53.354254   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:53.354466   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:53.379720   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:53.379841   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:53.396539   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:53.396643   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:53.410066   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:53.410136   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:53.421247   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:53.421321   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:53.432058   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:53.432134   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:53.442862   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:53.442932   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:53.452678   17919 logs.go:276] 0 containers: []
	W0328 12:06:53.452689   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:53.452750   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:53.463187   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:53.463204   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:53.463210   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:53.502811   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:53.502819   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:53.540849   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:53.540859   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:53.563455   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:53.563465   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:53.575040   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:53.575051   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:53.590075   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:53.590087   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:53.601947   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:53.601961   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:53.613266   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:53.613276   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:53.635981   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:53.635988   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:53.647312   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:53.647322   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:53.661038   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:53.661047   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:53.680525   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:53.680535   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:53.694536   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:53.694545   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:53.699027   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:53.699034   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:53.717586   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:53.717597   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:53.729207   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:53.729218   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:56.248949   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:01.251478   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:01.251877   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:01.287512   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:07:01.287628   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:01.305093   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:07:01.305183   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:01.319503   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:07:01.319577   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:01.331793   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:07:01.331859   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:01.342777   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:07:01.342842   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:01.353780   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:07:01.353850   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:01.364831   17919 logs.go:276] 0 containers: []
	W0328 12:07:01.364844   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:01.364903   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:01.375311   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:07:01.375329   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:07:01.375335   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:07:01.387185   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:07:01.387197   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:07:01.402909   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:07:01.402920   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:07:01.414943   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:07:01.414953   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:07:01.428298   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:07:01.428310   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:07:01.448692   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:07:01.448703   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:07:01.462940   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:07:01.462951   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:07:01.474483   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:01.474493   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:01.479157   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:07:01.479164   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:07:01.491027   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:07:01.491039   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:07:01.508946   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:01.508957   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:01.532238   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:01.532246   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:01.572807   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:07:01.572833   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:07:01.588759   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:07:01.588770   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:07:01.607627   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:07:01.607639   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:01.621546   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:01.621557   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:04.159459   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:09.161235   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:09.161422   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:09.185520   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:07:09.185619   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:09.199346   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:07:09.199424   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:09.210553   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:07:09.210620   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:09.221267   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:07:09.221331   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:09.232217   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:07:09.232302   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:09.242995   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:07:09.243066   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:09.253648   17919 logs.go:276] 0 containers: []
	W0328 12:07:09.253660   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:09.253719   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:09.263778   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:07:09.263796   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:07:09.263801   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:07:09.278109   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:07:09.278121   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:07:09.292987   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:07:09.292997   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:07:09.304763   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:09.304774   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:09.309273   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:09.309280   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:09.346221   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:07:09.346231   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:07:09.358094   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:09.358105   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:09.381103   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:07:09.381110   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:09.392485   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:09.392498   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:09.432674   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:07:09.432688   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:07:09.453182   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:07:09.453196   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:07:09.472414   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:07:09.472426   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:07:09.486920   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:07:09.486933   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:07:09.504432   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:07:09.504442   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:07:09.517884   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:07:09.517894   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:07:09.529869   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:07:09.529881   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:07:12.044007   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:17.046374   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:17.046491   17919 kubeadm.go:591] duration metric: took 4m4.546710208s to restartPrimaryControlPlane
	W0328 12:07:17.046564   17919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 12:07:17.046591   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0328 12:07:18.057713   17919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.011098041s)
	I0328 12:07:18.057774   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 12:07:18.062611   17919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:07:18.065644   17919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:07:18.068427   17919 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:07:18.068433   17919 kubeadm.go:156] found existing configuration files:
	
	I0328 12:07:18.068455   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf
	I0328 12:07:18.071169   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:07:18.071192   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:07:18.074602   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf
	I0328 12:07:18.077628   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:07:18.077650   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:07:18.080056   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf
	I0328 12:07:18.082933   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:07:18.082958   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:07:18.085703   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf
	I0328 12:07:18.088094   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:07:18.088113   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
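
	The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise. In this run every grep exits with status 2 because the preceding kubeadm reset had already removed the files. A condensed sketch of the same check, using the endpoint from this run:

	for f in admin kubelet controller-manager scheduler; do
	  grep -q "https://control-plane.minikube.internal:53167" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
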
	I0328 12:07:18.091034   17919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 12:07:18.108775   17919 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0328 12:07:18.109029   17919 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 12:07:18.164695   17919 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 12:07:18.164761   17919 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 12:07:18.164809   17919 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 12:07:18.214949   17919 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 12:07:18.220041   17919 out.go:204]   - Generating certificates and keys ...
	I0328 12:07:18.220075   17919 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 12:07:18.220107   17919 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 12:07:18.220149   17919 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 12:07:18.220191   17919 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 12:07:18.220227   17919 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 12:07:18.220259   17919 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 12:07:18.220297   17919 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 12:07:18.220334   17919 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 12:07:18.220370   17919 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 12:07:18.220406   17919 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 12:07:18.220424   17919 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 12:07:18.220448   17919 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 12:07:18.436976   17919 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 12:07:18.521940   17919 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 12:07:18.603576   17919 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 12:07:18.762835   17919 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 12:07:18.793821   17919 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 12:07:18.794357   17919 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 12:07:18.794386   17919 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 12:07:18.884729   17919 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 12:07:18.889337   17919 out.go:204]   - Booting up control plane ...
	I0328 12:07:18.889384   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 12:07:18.889415   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 12:07:18.889443   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 12:07:18.889483   17919 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 12:07:18.889574   17919 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 12:07:23.388573   17919 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501509 seconds
	I0328 12:07:23.388640   17919 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 12:07:23.393410   17919 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 12:07:23.905423   17919 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 12:07:23.905602   17919 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-623000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 12:07:24.409351   17919 kubeadm.go:309] [bootstrap-token] Using token: laorjf.a9sshcpx4y1fhue1
	I0328 12:07:24.412933   17919 out.go:204]   - Configuring RBAC rules ...
	I0328 12:07:24.412993   17919 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 12:07:24.413056   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 12:07:24.414772   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 12:07:24.417203   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 12:07:24.418073   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 12:07:24.418962   17919 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 12:07:24.421831   17919 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 12:07:24.609290   17919 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 12:07:24.814315   17919 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 12:07:24.814893   17919 kubeadm.go:309] 
	I0328 12:07:24.814928   17919 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 12:07:24.814936   17919 kubeadm.go:309] 
	I0328 12:07:24.814979   17919 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 12:07:24.814982   17919 kubeadm.go:309] 
	I0328 12:07:24.814995   17919 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 12:07:24.815026   17919 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 12:07:24.815056   17919 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 12:07:24.815060   17919 kubeadm.go:309] 
	I0328 12:07:24.815094   17919 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 12:07:24.815099   17919 kubeadm.go:309] 
	I0328 12:07:24.815123   17919 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 12:07:24.815125   17919 kubeadm.go:309] 
	I0328 12:07:24.815150   17919 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 12:07:24.815188   17919 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 12:07:24.815232   17919 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 12:07:24.815236   17919 kubeadm.go:309] 
	I0328 12:07:24.815279   17919 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 12:07:24.815326   17919 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 12:07:24.815329   17919 kubeadm.go:309] 
	I0328 12:07:24.815378   17919 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token laorjf.a9sshcpx4y1fhue1 \
	I0328 12:07:24.815431   17919 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b \
	I0328 12:07:24.815444   17919 kubeadm.go:309] 	--control-plane 
	I0328 12:07:24.815447   17919 kubeadm.go:309] 
	I0328 12:07:24.815494   17919 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 12:07:24.815498   17919 kubeadm.go:309] 
	I0328 12:07:24.815538   17919 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token laorjf.a9sshcpx4y1fhue1 \
	I0328 12:07:24.815609   17919 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b 
	I0328 12:07:24.815671   17919 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 12:07:24.815678   17919 cni.go:84] Creating CNI manager for ""
	I0328 12:07:24.815685   17919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:07:24.818556   17919 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 12:07:24.824406   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 12:07:24.827550   17919 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 12:07:24.832718   17919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 12:07:24.832775   17919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 12:07:24.832784   17919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-623000 minikube.k8s.io/updated_at=2024_03_28T12_07_24_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967 minikube.k8s.io/name=running-upgrade-623000 minikube.k8s.io/primary=true
	I0328 12:07:24.881844   17919 ops.go:34] apiserver oom_adj: -16
	I0328 12:07:24.881855   17919 kubeadm.go:1107] duration metric: took 49.113167ms to wait for elevateKubeSystemPrivileges
	W0328 12:07:24.881895   17919 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 12:07:24.881901   17919 kubeadm.go:393] duration metric: took 4m12.395622541s to StartCluster
	I0328 12:07:24.881910   17919 settings.go:142] acquiring lock: {Name:mkfc1d043149af7cff65561e827dba55cefba229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:07:24.882085   17919 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:07:24.882551   17919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:07:24.882728   17919 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:07:24.886472   17919 out.go:177] * Verifying Kubernetes components...
	I0328 12:07:24.882787   17919 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 12:07:24.882937   17919 config.go:182] Loaded profile config "running-upgrade-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:07:24.894275   17919 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-623000"
	I0328 12:07:24.894287   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:07:24.894277   17919 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-623000"
	I0328 12:07:24.894297   17919 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-623000"
	W0328 12:07:24.894301   17919 addons.go:243] addon storage-provisioner should already be in state true
	I0328 12:07:24.894304   17919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-623000"
	I0328 12:07:24.894344   17919 host.go:66] Checking if "running-upgrade-623000" exists ...
	I0328 12:07:24.895593   17919 kapi.go:59] client config for running-upgrade-623000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101caed60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:07:24.895794   17919 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-623000"
	W0328 12:07:24.895800   17919 addons.go:243] addon default-storageclass should already be in state true
	I0328 12:07:24.895807   17919 host.go:66] Checking if "running-upgrade-623000" exists ...
	I0328 12:07:24.899453   17919 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:07:24.903429   17919 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:07:24.903434   17919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 12:07:24.903440   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:07:24.904147   17919 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 12:07:24.904152   17919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 12:07:24.904156   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:07:24.988781   17919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:07:24.993471   17919 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:07:24.993515   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:07:24.998011   17919 api_server.go:72] duration metric: took 115.271708ms to wait for apiserver process to appear ...
	I0328 12:07:24.998020   17919 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:07:24.998026   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:25.023114   17919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 12:07:25.024427   17919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:07:30.000285   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:30.000360   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:35.001573   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:35.001621   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:40.002238   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:40.002262   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:45.003011   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:45.003034   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:50.003963   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:50.004021   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:55.005279   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:55.005309   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0328 12:07:55.371440   17919 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0328 12:07:55.376617   17919 out.go:177] * Enabled addons: storage-provisioner
	I0328 12:07:55.384656   17919 addons.go:505] duration metric: took 30.501517542s for enable addons: enabled=[storage-provisioner]
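
	From this point on the run is stuck in the probe-and-diagnose loop for good: the apiserver at 10.0.2.15:8443 never answers /healthz, and the 5-second gap between each "Checking" and "stopped" pair shows the Get call dying at the client timeout rather than being refused. A rough equivalent of the failing probe (curl is an assumption for illustration only; minikube itself probes with a Go HTTP client, and -k stands in for the cluster CA it actually trusts):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
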
	I0328 12:08:00.006877   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:00.006952   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:05.009318   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:05.009337   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:10.011550   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:10.011581   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:15.013892   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:15.013931   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:20.015775   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:20.015808   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:25.018085   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:25.018216   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:25.030070   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:25.030154   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:25.040184   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:25.040250   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:25.059749   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:25.059825   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:25.088333   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:25.088417   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:25.108443   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:25.108521   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:25.118643   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:25.118711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:25.128319   17919 logs.go:276] 0 containers: []
	W0328 12:08:25.128332   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:25.128390   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:25.138513   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:25.138529   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:25.138535   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:25.174427   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:25.174439   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:25.188441   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:25.188454   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:25.202200   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:25.202209   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:25.214270   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:25.214283   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:25.225542   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:25.225552   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:25.245775   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:25.245788   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:25.268625   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:25.268637   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:25.272990   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:25.272995   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:25.306511   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:25.306525   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:25.321881   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:25.321892   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:25.337917   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:25.337927   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:25.349808   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:25.349822   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:27.861908   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:32.864307   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:32.864438   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:32.876212   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:32.876286   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:32.887528   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:32.887606   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:32.898539   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:32.898608   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:32.909162   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:32.909231   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:32.922515   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:32.922587   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:32.932993   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:32.933062   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:32.943189   17919 logs.go:276] 0 containers: []
	W0328 12:08:32.943203   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:32.943255   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:32.953389   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:32.953408   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:32.953414   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:32.957737   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:32.957747   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:32.972187   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:32.972198   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:32.984195   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:32.984206   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:32.995796   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:32.995807   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:33.010171   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:33.010186   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:33.034561   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:33.034570   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:33.046935   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:33.046945   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:33.083412   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:33.083421   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:33.118123   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:33.118137   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:33.131938   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:33.131949   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:33.144085   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:33.144097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:33.162369   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:33.162382   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:35.677225   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:40.678132   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:40.678462   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:40.708678   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:40.708803   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:40.726383   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:40.726475   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:40.739751   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:40.739829   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:40.752086   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:40.752154   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:40.762601   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:40.762679   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:40.772892   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:40.772964   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:40.782734   17919 logs.go:276] 0 containers: []
	W0328 12:08:40.782744   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:40.782799   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:40.793778   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:40.793794   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:40.793800   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:40.812702   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:40.812713   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:40.824339   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:40.824352   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:40.860520   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:40.860529   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:40.864996   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:40.865003   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:40.903409   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:40.903422   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:40.918034   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:40.918044   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:40.929946   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:40.929957   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:40.954009   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:40.954017   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:40.968485   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:40.968495   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:40.980003   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:40.980016   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:40.992314   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:40.992324   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:41.007176   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:41.007187   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:43.523077   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:48.525485   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:48.525686   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:48.545069   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:48.545164   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:48.559335   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:48.559407   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:48.573361   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:48.573429   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:48.584370   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:48.584446   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:48.594844   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:48.594905   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:48.604694   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:48.604752   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:48.617002   17919 logs.go:276] 0 containers: []
	W0328 12:08:48.617013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:48.617071   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:48.627566   17919 logs.go:276] 1 containers: [bd9e4606aec2]
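
The eight docker ps lookups above follow a fixed pattern: one call per k8s_<component> name filter, with logs.go:276 reporting how many IDs matched (zero for "kindnet" here, since this cluster does not run it). A sketch of the same discovery step run locally, assuming direct access to the Docker daemon rather than minikube's ssh_runner.go transport:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the per-component lookups in the log: one
    // `docker ps -a` call filtered on the k8s_<component> name prefix,
    // returning the matching container IDs (possibly none).
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }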
	I0328 12:08:48.627580   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:48.627586   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:48.664486   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:48.664501   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:48.669295   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:48.669305   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:48.706120   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:48.706131   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:48.720670   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:48.720680   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:48.733352   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:48.733366   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:48.744625   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:48.744640   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:48.759350   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:48.759362   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:48.771139   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:48.771149   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:48.788503   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:48.788513   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:48.812842   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:48.812851   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:48.824906   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:48.824917   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:48.839597   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:48.839606   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
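
Each "Gathering logs for X" pair above maps one target to one shell command: docker logs --tail 400 <id> for component containers, journalctl for kubelet and Docker, a dmesg pipeline, kubectl describe nodes, and a crictl-with-docker-fallback for container status. A sketch that collects the same set, with the commands copied verbatim from the log; the map keys and the sample container ID are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherCmds reproduces the shell commands visible in the log for each
    // gathering step. The tail length (400), unit names, and paths are taken
    // verbatim from the log; minikube wraps each in `/bin/bash -c` over SSH.
    func gatherCmds(containerID string) map[string]string {
        return map[string]string{
            "container logs": "docker logs --tail 400 " + containerID,
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg": "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg" +
                " | tail -n 400",
            "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl" +
                " describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            // fall back to plain docker ps when crictl is absent, as in the log
            "container status": "sudo `which crictl || echo crictl` ps -a" +
                " || sudo docker ps -a",
        }
    }

    func main() {
        for name, cmd := range gatherCmds("67239a430e57") {
            fmt.Printf("gathering %s ...\n", name)
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("%s\n", out)
        }
    }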
	I0328 12:08:51.352977   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:56.355241   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:56.355366   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:56.366185   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:56.366265   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:56.376661   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:56.376731   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:56.387116   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:56.387191   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:56.397124   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:56.397189   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:56.407613   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:56.407691   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:56.418345   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:56.418424   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:56.428388   17919 logs.go:276] 0 containers: []
	W0328 12:08:56.428399   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:56.428457   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:56.438831   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:56.438845   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:56.438851   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:56.450634   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:56.450647   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:56.463173   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:56.463184   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:56.478965   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:56.478976   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:56.493135   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:56.493145   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:56.504973   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:56.504983   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:56.516825   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:56.516837   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:56.536687   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:56.536701   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:56.561222   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:56.561234   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:56.596844   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:56.596853   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:56.601031   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:56.601037   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:56.634374   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:56.634388   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:56.650031   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:56.650042   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:59.169239   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:04.170387   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:04.170558   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:04.183282   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:04.183354   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:04.194354   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:04.194426   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:04.205026   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:04.205101   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:04.215459   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:04.215526   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:04.231055   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:04.231127   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:04.242065   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:04.242129   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:04.252473   17919 logs.go:276] 0 containers: []
	W0328 12:09:04.252483   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:04.252537   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:04.263376   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:04.263392   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:04.263398   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:04.297016   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:04.297023   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:04.308572   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:04.308587   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:04.321573   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:04.321585   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:04.346199   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:04.346207   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:04.365284   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:04.365295   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:04.382296   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:04.382306   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:04.403270   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:04.403284   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:04.408329   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:04.408335   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:04.442134   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:04.442144   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:04.456456   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:04.456468   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:04.469926   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:04.469936   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:04.481228   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:04.481239   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:06.994364   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:11.996746   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:11.996944   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:12.022733   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:12.022853   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:12.037065   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:12.037145   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:12.048968   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:12.049043   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:12.059872   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:12.059936   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:12.070696   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:12.070769   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:12.081755   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:12.081822   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:12.097716   17919 logs.go:276] 0 containers: []
	W0328 12:09:12.097732   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:12.097787   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:12.107942   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:12.107957   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:12.107962   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:12.125640   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:12.125650   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:12.160252   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:12.160266   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:12.195412   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:12.195424   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:12.209663   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:12.209676   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:12.221972   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:12.221984   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:12.240994   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:12.241003   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:12.254781   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:12.254790   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:12.261915   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:12.261925   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:12.277717   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:12.277730   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:12.297130   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:12.297143   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:12.308522   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:12.308532   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:12.332635   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:12.332643   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:14.846169   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:19.848546   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:19.848661   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:19.860423   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:19.860494   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:19.871318   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:19.871383   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:19.881567   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:19.881639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:19.894087   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:19.894149   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:19.904445   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:19.904520   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:19.914887   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:19.914960   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:19.929149   17919 logs.go:276] 0 containers: []
	W0328 12:09:19.929159   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:19.929219   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:19.944773   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:19.944790   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:19.944798   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:19.980459   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:19.980471   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:19.995341   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:19.995352   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:20.007210   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:20.007221   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:20.027181   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:20.027191   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:20.045344   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:20.045353   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:20.068948   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:20.068958   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:20.103321   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:20.103331   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:20.108589   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:20.108599   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:20.123481   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:20.123491   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:20.136292   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:20.136302   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:20.152064   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:20.152079   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:20.163825   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:20.163837   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:22.677345   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:27.679750   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:27.679933   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:27.692925   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:27.692999   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:27.703916   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:27.703986   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:27.714440   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:27.714506   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:27.730303   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:27.730373   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:27.740833   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:27.740903   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:27.751309   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:27.751377   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:27.761404   17919 logs.go:276] 0 containers: []
	W0328 12:09:27.761418   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:27.761483   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:27.771627   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:27.771644   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:27.771650   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:27.782807   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:27.782817   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:27.798276   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:27.798285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:27.819163   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:27.819173   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:27.830953   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:27.830962   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:27.866056   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:27.866068   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:27.870512   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:27.870520   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:27.904697   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:27.904710   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:27.919662   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:27.919671   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:27.931759   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:27.931771   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:27.946281   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:27.946292   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:27.957603   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:27.957615   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:27.968957   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:27.968968   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:30.494558   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:35.495499   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:35.495711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:35.516593   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:35.516700   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:35.530910   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:35.530985   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:35.542828   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:35.542887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:35.553193   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:35.553268   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:35.563878   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:35.563939   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:35.574976   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:35.575042   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:35.585307   17919 logs.go:276] 0 containers: []
	W0328 12:09:35.585319   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:35.585378   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:35.595747   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:35.595761   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:35.595766   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:35.606979   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:35.606990   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:35.625448   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:35.625457   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:35.637096   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:35.637106   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:35.672745   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:35.672753   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:35.677157   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:35.677164   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:35.691477   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:35.691487   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:35.705830   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:35.705844   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:35.717272   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:35.717283   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:35.741954   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:35.741961   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:35.782054   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:35.782067   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:35.794480   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:35.794490   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:35.812233   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:35.812243   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:38.324115   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:43.326513   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:43.326607   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:43.337789   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:43.337859   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:43.348261   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:43.348338   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:43.358681   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:43.358754   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:43.368856   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:43.368922   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:43.379502   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:43.379577   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:43.397921   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:43.397994   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:43.408043   17919 logs.go:276] 0 containers: []
	W0328 12:09:43.408055   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:43.408108   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:43.422481   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:43.422502   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:43.422507   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:43.440795   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:43.440805   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:43.452564   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:43.452575   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:43.466549   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:43.466558   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:43.478247   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:43.478258   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:43.492843   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:43.492855   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:43.504616   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:43.504625   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:43.516962   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:43.516971   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:43.528988   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:43.528997   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:43.546542   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:43.546553   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:43.557957   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:43.557969   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:43.595604   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:43.595615   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:43.600182   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:43.600191   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:43.611223   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:43.611235   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:43.636512   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:43.636520   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:46.172568   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:51.175016   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:51.175245   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:51.191410   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:51.191493   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:51.203745   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:51.203816   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:51.214762   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:51.214836   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:51.225736   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:51.225815   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:51.236718   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:51.236788   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:51.250496   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:51.250564   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:51.260560   17919 logs.go:276] 0 containers: []
	W0328 12:09:51.260572   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:51.260637   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:51.271026   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:51.271042   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:51.271048   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:51.306657   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:51.306667   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:51.325021   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:51.325030   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:51.338668   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:51.338678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:51.350548   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:51.350560   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:51.362752   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:51.362763   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:51.380130   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:51.380141   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:51.398224   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:51.398234   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:51.402528   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:51.402537   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:51.413963   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:51.413971   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:51.425028   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:51.425041   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:51.436151   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:51.436162   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:51.455020   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:51.455031   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:51.490196   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:51.490204   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:51.514277   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:51.514284   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:54.028034   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:59.030632   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:59.030791   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:59.045943   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:59.046026   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:59.058912   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:59.058986   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:59.070423   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:59.070491   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:59.081276   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:59.081342   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:59.091524   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:59.091596   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:59.101611   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:59.101677   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:59.112002   17919 logs.go:276] 0 containers: []
	W0328 12:09:59.112013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:59.112070   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:59.123088   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:59.123105   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:59.123110   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:59.137754   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:59.137763   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:59.149626   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:59.149637   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:59.175654   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:59.175663   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:59.210244   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:59.210252   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:59.245818   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:59.245829   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:59.257578   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:59.257589   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:59.268986   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:59.268995   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:59.282739   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:59.282749   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:59.298307   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:59.298318   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:59.310050   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:59.310061   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:59.314392   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:59.314400   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:59.328830   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:59.328839   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:59.350088   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:59.350098   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:59.366074   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:59.366085   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:01.881214   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:06.883543   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:06.883663   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:06.897657   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:06.897744   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:06.909250   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:06.909332   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:06.919865   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:06.919942   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:06.931292   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:06.931375   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:06.942538   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:06.942614   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:06.953301   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:06.953380   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:06.963513   17919 logs.go:276] 0 containers: []
	W0328 12:10:06.963526   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:06.963599   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:06.974720   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:06.974736   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:06.974741   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:07.009886   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:07.009900   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:07.014453   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:07.014460   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:07.029040   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:07.029050   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:07.041054   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:07.041064   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:07.065871   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:07.065879   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:07.078085   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:07.078097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:07.090212   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:07.090222   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:07.106310   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:07.106320   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:07.121362   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:07.121377   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:07.132962   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:07.132973   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:07.144425   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:07.144436   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:07.180495   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:07.180505   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:07.195782   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:07.195792   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:07.211200   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:07.211211   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:09.731236   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:14.733612   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:14.733855   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:14.764007   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:14.764124   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:14.779854   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:14.781018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:14.793721   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:14.793795   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:14.810050   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:14.810119   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:14.820465   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:14.820538   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:14.839117   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:14.839178   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:14.849323   17919 logs.go:276] 0 containers: []
	W0328 12:10:14.849336   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:14.849400   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:14.859846   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:14.859862   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:14.859868   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:14.877059   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:14.877069   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:14.881483   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:14.881492   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:14.892822   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:14.892832   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:14.904610   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:14.904620   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:14.917150   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:14.917161   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:14.928954   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:14.928964   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:14.946774   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:14.946787   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:14.958554   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:14.958566   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:14.984374   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:14.984384   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:15.021002   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:15.021013   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:15.032795   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:15.032805   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:15.044457   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:15.044467   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:15.078662   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:15.078672   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:15.092919   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:15.092929   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:17.608938   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:22.611452   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:22.611803   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:22.639804   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:22.639926   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:22.658782   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:22.658877   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:22.677815   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:22.677887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:22.688958   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:22.689024   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:22.699729   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:22.699789   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:22.710999   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:22.711075   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:22.720733   17919 logs.go:276] 0 containers: []
	W0328 12:10:22.720745   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:22.720799   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:22.731283   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:22.731301   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:22.731306   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:22.745677   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:22.745688   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:22.767868   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:22.767878   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:22.786463   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:22.786475   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:22.792662   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:22.792672   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:22.804405   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:22.804415   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:22.819261   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:22.819271   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:22.831517   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:22.831527   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:22.856366   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:22.856374   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:22.871274   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:22.871285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:22.883081   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:22.883095   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:22.924261   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:22.924275   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:22.940276   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:22.940285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:22.954048   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:22.954057   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:22.966442   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:22.966453   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:25.502220   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:30.502823   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:30.503183   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:30.539341   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:30.539482   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:30.561056   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:30.561161   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:30.575482   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:30.575565   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:30.588489   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:30.588580   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:30.600274   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:30.600343   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:30.611269   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:30.611346   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:30.626559   17919 logs.go:276] 0 containers: []
	W0328 12:10:30.626574   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:30.626639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:30.637690   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:30.637709   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:30.637716   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:30.649331   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:30.649341   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:30.664067   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:30.664083   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:30.701663   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:30.701675   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:30.713100   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:30.713112   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:30.717583   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:30.717590   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:30.729229   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:30.729242   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:30.741398   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:30.741409   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:30.752701   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:30.752711   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:30.773667   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:30.773678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:30.785665   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:30.785678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:30.800035   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:30.800048   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:30.818175   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:30.818186   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:30.832993   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:30.833003   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:30.858290   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:30.858301   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:33.395006   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:38.397346   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:38.397475   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:38.408730   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:38.408807   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:38.419996   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:38.420081   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:38.433473   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:38.433551   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:38.444930   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:38.445005   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:38.455970   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:38.456052   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:38.472014   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:38.472087   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:38.483925   17919 logs.go:276] 0 containers: []
	W0328 12:10:38.483957   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:38.484026   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:38.503273   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:38.503289   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:38.503295   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:38.519051   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:38.519062   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:38.536609   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:38.536621   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:38.573486   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:38.573508   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:38.615100   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:38.615114   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:38.630100   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:38.630111   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:38.642087   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:38.642099   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:38.654477   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:38.654491   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:38.666206   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:38.666216   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:38.670858   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:38.670867   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:38.685238   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:38.685249   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:38.697450   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:38.697462   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:38.711693   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:38.711706   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:38.726058   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:38.726073   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:38.749869   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:38.749879   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:41.263849   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:46.266177   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:46.266450   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:46.293982   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:46.294092   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:46.310881   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:46.310969   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:46.323293   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:46.323368   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:46.334486   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:46.334551   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:46.345259   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:46.345329   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:46.356024   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:46.356101   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:46.369004   17919 logs.go:276] 0 containers: []
	W0328 12:10:46.369016   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:46.369074   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:46.379982   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:46.379998   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:46.380003   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:46.392609   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:46.392622   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:46.414704   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:46.414716   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:46.425606   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:46.425619   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:46.439309   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:46.439320   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:46.463365   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:46.463373   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:46.467744   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:46.467750   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:46.480119   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:46.480132   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:46.492124   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:46.492135   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:46.526581   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:46.526592   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:46.541320   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:46.541332   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:46.552833   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:46.552843   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:46.568228   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:46.568240   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:46.580444   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:46.580457   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:46.599037   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:46.599048   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:49.135880   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:54.138457   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:54.138548   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:54.149576   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:54.149641   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:54.159973   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:54.160032   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:54.171249   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:54.171330   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:54.184169   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:54.184238   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:54.196268   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:54.196339   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:54.206589   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:54.206652   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:54.218270   17919 logs.go:276] 0 containers: []
	W0328 12:10:54.218283   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:54.218347   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:54.229261   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:54.229283   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:54.229288   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:54.243747   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:54.243757   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:54.258051   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:54.258062   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:54.269918   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:54.269928   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:54.287310   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:54.287319   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:54.323390   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:54.323402   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:54.359005   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:54.359019   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:54.374527   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:54.374537   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:54.397759   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:54.397767   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:54.409739   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:54.409750   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:54.421519   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:54.421531   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:54.433073   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:54.433083   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:54.445061   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:54.445072   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:54.456480   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:54.456490   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:54.461317   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:54.461324   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:56.980881   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:01.983218   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:01.983467   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:02.004017   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:02.004124   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:02.021431   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:02.021510   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:02.033699   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:02.033772   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:02.045260   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:02.045333   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:02.056310   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:02.056381   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:02.067029   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:02.067094   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:02.077818   17919 logs.go:276] 0 containers: []
	W0328 12:11:02.077828   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:02.077887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:02.107430   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:02.107451   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:02.107456   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:02.123578   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:02.123595   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:02.158274   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:02.158294   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:02.193279   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:02.193289   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:02.206018   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:02.206030   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:02.218133   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:02.218144   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:02.229798   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:02.229808   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:02.244353   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:02.244364   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:02.255854   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:02.255865   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:02.267149   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:02.267160   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:02.281774   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:02.281783   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:02.299370   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:02.299381   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:02.311173   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:02.311182   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:02.315695   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:02.315703   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:02.333434   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:02.333444   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:04.859703   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:09.862056   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:09.862263   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:09.880729   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:09.880810   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:09.897267   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:09.897349   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:09.908845   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:09.908912   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:09.919831   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:09.919896   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:09.930434   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:09.930508   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:09.940801   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:09.940873   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:09.951069   17919 logs.go:276] 0 containers: []
	W0328 12:11:09.951080   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:09.951133   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:09.961399   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:09.961416   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:09.961421   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:09.975862   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:09.975871   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:09.993890   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:09.993900   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:10.019184   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:10.019197   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:10.030939   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:10.030952   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:10.066208   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:10.066220   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:10.071351   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:10.071358   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:10.083139   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:10.083150   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:10.095071   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:10.095085   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:10.110442   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:10.110455   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:10.122410   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:10.122420   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:10.134191   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:10.134202   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:10.146158   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:10.146168   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:10.180113   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:10.180127   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:10.194520   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:10.194533   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:12.708104   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:17.710481   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:17.710666   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:17.732269   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:17.732366   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:17.747767   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:17.747841   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:17.760862   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:17.760939   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:17.771652   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:17.771718   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:17.781721   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:17.781793   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:17.792641   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:17.792711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:17.802751   17919 logs.go:276] 0 containers: []
	W0328 12:11:17.802761   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:17.802812   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:17.813370   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:17.813387   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:17.813391   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:17.837159   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:17.837168   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:17.849657   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:17.849670   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:17.886859   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:17.886880   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:17.900651   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:17.900676   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:17.913768   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:17.913783   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:17.933268   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:17.933279   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:17.944887   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:17.944898   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:17.962974   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:17.962984   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:17.977543   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:17.977551   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:17.989255   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:17.989267   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:18.003202   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:18.003212   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:18.021352   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:18.021365   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:18.033200   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:18.033211   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:18.037758   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:18.037766   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:20.574840   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:25.577353   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:25.580701   17919 out.go:177] 
	W0328 12:11:25.584518   17919 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0328 12:11:25.584528   17919 out.go:239] * 
	W0328 12:11:25.585263   17919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:11:25.594579   17919 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-623000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
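To replay this failure outside the harness, the commands already recorded in this report can be run by hand (a sketch; it assumes the running-upgrade-623000 profile is still present and that curl is available inside the guest, neither of which the log confirms):

	# re-run the exact start invocation that exited with status 80
	out/minikube-darwin-arm64 start -p running-upgrade-623000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	# probe the same healthz endpoint the waiter polled (curl in the guest is an assumption)
	out/minikube-darwin-arm64 ssh -p running-upgrade-623000 -- curl -k https://10.0.2.15:8443/healthz
	# collect the full log bundle suggested in the error box above
	out/minikube-darwin-arm64 -p running-upgrade-623000 logs --file=logs.txt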
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-28 12:11:25.676725 -0700 PDT m=+1410.430148210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-623000 -n running-upgrade-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-623000 -n running-upgrade-623000: exit status 2 (15.781221083s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-623000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-641000          | force-systemd-flag-641000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-080000              | force-systemd-env-080000  | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-080000           | force-systemd-env-080000  | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT | 28 Mar 24 12:01 PDT |
	| start   | -p docker-flags-848000                | docker-flags-848000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-641000             | force-systemd-flag-641000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-641000          | force-systemd-flag-641000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT | 28 Mar 24 12:01 PDT |
	| start   | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-848000 ssh               | docker-flags-848000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-848000 ssh               | docker-flags-848000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-848000                | docker-flags-848000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT | 28 Mar 24 12:01 PDT |
	| start   | -p cert-options-243000                | cert-options-243000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-243000 ssh               | cert-options-243000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-243000 -- sudo        | cert-options-243000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-243000                | cert-options-243000       | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:01 PDT | 28 Mar 24 12:01 PDT |
	| start   | -p running-upgrade-623000             | minikube                  | jenkins | v1.26.0        | 28 Mar 24 12:01 PDT | 28 Mar 24 12:02 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-623000             | running-upgrade-623000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:02 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT | 28 Mar 24 12:04 PDT |
	| start   | -p kubernetes-upgrade-850000          | kubernetes-upgrade-850000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-850000          | kubernetes-upgrade-850000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT | 28 Mar 24 12:04 PDT |
	| start   | -p kubernetes-upgrade-850000          | kubernetes-upgrade-850000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0   |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-850000          | kubernetes-upgrade-850000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:04 PDT | 28 Mar 24 12:04 PDT |
	| start   | -p stopped-upgrade-732000             | minikube                  | jenkins | v1.26.0        | 28 Mar 24 12:05 PDT | 28 Mar 24 12:05 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-732000 stop           | minikube                  | jenkins | v1.26.0        | 28 Mar 24 12:05 PDT | 28 Mar 24 12:06 PDT |
	| start   | -p stopped-upgrade-732000             | stopped-upgrade-732000    | jenkins | v1.33.0-beta.0 | 28 Mar 24 12:06 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 12:06:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 12:06:00.665317   18107 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:06:00.665464   18107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:06:00.665468   18107 out.go:304] Setting ErrFile to fd 2...
	I0328 12:06:00.665470   18107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:06:00.665641   18107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:06:00.666798   18107 out.go:298] Setting JSON to false
	I0328 12:06:00.686567   18107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11132,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:06:00.686639   18107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:06:00.691235   18107 out.go:177] * [stopped-upgrade-732000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:06:00.698280   18107 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:06:00.698303   18107 notify.go:220] Checking for updates...
	I0328 12:06:00.705192   18107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:06:00.708255   18107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:06:00.712256   18107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:06:00.715214   18107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:06:00.718279   18107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:06:00.721575   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:06:00.725208   18107 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 12:06:00.728217   18107 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:06:00.732252   18107 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:06:00.739279   18107 start.go:297] selected driver: qemu2
	I0328 12:06:00.739286   18107 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:00.739351   18107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:06:00.742286   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:06:00.742304   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:06:00.742337   18107 start.go:340] cluster config:
	{Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:00.742392   18107 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:06:00.754186   18107 out.go:177] * Starting "stopped-upgrade-732000" primary control-plane node in "stopped-upgrade-732000" cluster
	I0328 12:06:00.758249   18107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:06:00.758266   18107 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0328 12:06:00.758278   18107 cache.go:56] Caching tarball of preloaded images
	I0328 12:06:00.758333   18107 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:06:00.758340   18107 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0328 12:06:00.758394   18107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/config.json ...
	I0328 12:06:00.758940   18107 start.go:360] acquireMachinesLock for stopped-upgrade-732000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:06:00.758976   18107 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "stopped-upgrade-732000"
	I0328 12:06:00.758986   18107 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:06:00.758992   18107 fix.go:54] fixHost starting: 
	I0328 12:06:00.759127   18107 fix.go:112] recreateIfNeeded on stopped-upgrade-732000: state=Stopped err=<nil>
	W0328 12:06:00.759136   18107 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:06:00.766255   18107 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-732000" ...
	I0328 12:06:00.900003   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:00.770235   18107 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53341-:22,hostfwd=tcp::53342-:2376,hostname=stopped-upgrade-732000 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/disk.qcow2
	I0328 12:06:00.820209   18107 main.go:141] libmachine: STDOUT: 
	I0328 12:06:00.820246   18107 main.go:141] libmachine: STDERR: 
	I0328 12:06:00.820252   18107 main.go:141] libmachine: Waiting for VM to start (ssh -p 53341 docker@127.0.0.1)...
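
The -nic user,...,hostfwd=tcp::53341-:22,hostfwd=tcp::53342-:2376 arguments in the qemu-system-aarch64 invocation above are what make the "ssh -p 53341 docker@127.0.0.1" wait loop work: QEMU's user-mode network stack forwards host port 53341 to guest port 22 (sshd) and host port 53342 to guest port 2376 (the TLS Docker daemon). A minimal standalone check of that forward, using the port numbers from this run:

    # once the guest sshd is up, the host-side hostfwd accepts connections
    ssh -p 53341 docker@127.0.0.1 true && echo "forward up"
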
	I0328 12:06:05.902387   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:05.902965   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:05.943337   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:05.943465   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:05.963209   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:05.963325   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:05.978093   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:05.978178   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:05.990168   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:05.990237   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:06.000950   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:06.001018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:06.011562   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:06.011639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:06.021452   17919 logs.go:276] 0 containers: []
	W0328 12:06:06.021465   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:06.021537   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:06.032223   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:06.032242   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:06.032254   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:06.046143   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:06.046155   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:06.061432   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:06.061445   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:06.072795   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:06.072807   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:06.090553   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:06.090562   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:06.107898   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:06.107911   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:06.111970   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:06.111978   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:06.125761   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:06.125771   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:06.143087   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:06.143097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:06.158088   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:06.158097   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:06.197569   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:06.197580   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:06.218441   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:06.218454   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:06.230095   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:06.230106   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:06.242218   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:06.242231   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:06.283455   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:06.283468   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:06.297086   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:06.297099   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:08.822233   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:13.824636   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
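
The healthz probes above are plain HTTPS GETs against the apiserver with a client-side timeout (note the roughly 5 s gap between each "Checking" line and its "stopped" line); while the apiserver is down each probe times out and the loop falls back to gathering container logs. An equivalent standalone probe, a sketch rather than minikube's actual code:

    # -k skips certificate verification; --max-time mirrors the ~5 s client timeout seen above
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver unreachable"
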
	I0328 12:06:13.824867   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:13.842441   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:13.842528   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:13.855902   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:13.855979   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:13.868019   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:13.868084   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:13.884942   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:13.885018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:13.895289   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:13.895348   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:13.905933   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:13.906004   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:13.921003   17919 logs.go:276] 0 containers: []
	W0328 12:06:13.921013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:13.921065   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:13.937048   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:13.937068   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:13.937073   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:13.962190   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:13.962205   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:13.977487   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:13.977497   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:13.988735   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:13.988748   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:14.023236   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:14.023247   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:14.048581   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:14.048592   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:14.062327   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:14.062340   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:14.079720   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:14.079731   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:14.091373   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:14.091386   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:14.111674   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:14.111686   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:14.125049   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:14.125057   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:14.129198   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:14.129203   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:14.151627   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:14.151633   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:14.189960   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:14.189966   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:14.203666   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:14.203678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:14.223632   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:14.223641   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:16.741507   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:20.243958   18107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/config.json ...
	I0328 12:06:20.244401   18107 machine.go:94] provisionDockerMachine start ...
	I0328 12:06:20.244496   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.244752   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.244761   18107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 12:06:20.311434   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 12:06:20.311454   18107 buildroot.go:166] provisioning hostname "stopped-upgrade-732000"
	I0328 12:06:20.311536   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.311699   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.311710   18107 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-732000 && echo "stopped-upgrade-732000" | sudo tee /etc/hostname
	I0328 12:06:20.373379   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-732000
	
	I0328 12:06:20.373435   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.373550   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.373560   18107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-732000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-732000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-732000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 12:06:20.427356   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 12:06:20.427369   18107 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17877-15366/.minikube CaCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17877-15366/.minikube}
	I0328 12:06:20.427377   18107 buildroot.go:174] setting up certificates
	I0328 12:06:20.427381   18107 provision.go:84] configureAuth start
	I0328 12:06:20.427386   18107 provision.go:143] copyHostCerts
	I0328 12:06:20.427457   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem, removing ...
	I0328 12:06:20.427465   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem
	I0328 12:06:20.427571   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem (1078 bytes)
	I0328 12:06:20.427760   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem, removing ...
	I0328 12:06:20.427764   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem
	I0328 12:06:20.427814   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem (1123 bytes)
	I0328 12:06:20.427922   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem, removing ...
	I0328 12:06:20.427925   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem
	I0328 12:06:20.427970   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem (1675 bytes)
	I0328 12:06:20.428063   18107 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-732000 san=[127.0.0.1 localhost minikube stopped-upgrade-732000]
	I0328 12:06:20.524868   18107 provision.go:177] copyRemoteCerts
	I0328 12:06:20.524912   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 12:06:20.524922   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:20.553933   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 12:06:20.560654   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 12:06:20.567202   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 12:06:20.574363   18107 provision.go:87] duration metric: took 146.970958ms to configureAuth
	I0328 12:06:20.574372   18107 buildroot.go:189] setting minikube options for container-runtime
	I0328 12:06:20.574482   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:06:20.574514   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.574598   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.574602   18107 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 12:06:20.624666   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 12:06:20.624674   18107 buildroot.go:70] root file system type: tmpfs
	I0328 12:06:20.624723   18107 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 12:06:20.624772   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.624873   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.624905   18107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 12:06:20.680061   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
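
The pair of ExecStart= lines in the unit above is deliberate: for a non-oneshot service systemd rejects a second ExecStart value, so a replacement unit (or drop-in) must first clear the inherited command with an empty assignment, exactly as the embedded comments explain. The same pattern reduced to a minimal drop-in, with a hypothetical override path:

    # /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
    [Service]
    # clear the ExecStart inherited from the base unit, then set the replacement
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock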
	
	I0328 12:06:20.680110   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.680233   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.680241   18107 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 12:06:21.020265   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
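
The "diff ... || { mv ...; systemctl ...; }" command above uses diff's exit status as a change detector: exit 0 (files identical) skips the block after ||, while a non-zero exit, either because the unit changed or, as in this run, because it did not exist yet, moves the staged docker.service.new into place and reloads/restarts docker. The guard in isolation, a sketch with the paths from the log:

    unit=/lib/systemd/system/docker.service      # staged copy is written to "$unit.new" above
    sudo diff -u "$unit" "$unit.new" || {
        sudo mv "$unit.new" "$unit"
        sudo systemctl daemon-reload && sudo systemctl restart docker
    }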
	
	I0328 12:06:21.020277   18107 machine.go:97] duration metric: took 775.858583ms to provisionDockerMachine
	I0328 12:06:21.020285   18107 start.go:293] postStartSetup for "stopped-upgrade-732000" (driver="qemu2")
	I0328 12:06:21.020297   18107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 12:06:21.020365   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 12:06:21.020374   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:21.050154   18107 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 12:06:21.051380   18107 info.go:137] Remote host: Buildroot 2021.02.12
	I0328 12:06:21.051387   18107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/addons for local assets ...
	I0328 12:06:21.051457   18107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/files for local assets ...
	I0328 12:06:21.051575   18107 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem -> 157842.pem in /etc/ssl/certs
	I0328 12:06:21.051714   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 12:06:21.054900   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:06:21.062004   18107 start.go:296] duration metric: took 41.708ms for postStartSetup
	I0328 12:06:21.062023   18107 fix.go:56] duration metric: took 20.302793542s for fixHost
	I0328 12:06:21.062060   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:21.062161   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:21.062166   18107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 12:06:21.111753   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711652781.220273837
	
	I0328 12:06:21.111759   18107 fix.go:216] guest clock: 1711652781.220273837
	I0328 12:06:21.111763   18107 fix.go:229] Guest: 2024-03-28 12:06:21.220273837 -0700 PDT Remote: 2024-03-28 12:06:21.062025 -0700 PDT m=+20.430843168 (delta=158.248837ms)
	I0328 12:06:21.111773   18107 fix.go:200] guest clock delta is within tolerance: 158.248837ms
	I0328 12:06:21.111776   18107 start.go:83] releasing machines lock for "stopped-upgrade-732000", held for 20.352556s
	I0328 12:06:21.111833   18107 ssh_runner.go:195] Run: cat /version.json
	I0328 12:06:21.111842   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:21.111836   18107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 12:06:21.111864   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	W0328 12:06:21.112410   18107 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53341: connect: connection refused
	I0328 12:06:21.112430   18107 retry.go:31] will retry after 286.462755ms: dial tcp [::1]:53341: connect: connection refused
	W0328 12:06:21.437591   18107 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0328 12:06:21.437734   18107 ssh_runner.go:195] Run: systemctl --version
	I0328 12:06:21.441155   18107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 12:06:21.443959   18107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 12:06:21.443999   18107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0328 12:06:21.448608   18107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0328 12:06:21.455709   18107 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 12:06:21.455722   18107 start.go:494] detecting cgroup driver to use...
	I0328 12:06:21.455849   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:06:21.465027   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0328 12:06:21.468525   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 12:06:21.471691   18107 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 12:06:21.471725   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 12:06:21.475126   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:06:21.478509   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 12:06:21.481954   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:06:21.484916   18107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 12:06:21.487680   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 12:06:21.491131   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 12:06:21.494575   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 12:06:21.497669   18107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 12:06:21.500190   18107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 12:06:21.502969   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:21.568101   18107 ssh_runner.go:195] Run: sudo systemctl restart containerd
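
The sed edits above rewrite /etc/containerd/config.toml in place rather than templating a fresh file; the change that selects the "cgroupfs" cgroup driver is SystemdCgroup = false in the runc runtime options. The relevant fragment of the resulting file looks roughly like this (a sketch of the standard containerd CRI layout, not the full generated config):

    # /etc/containerd/config.toml (fragment)
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false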
	I0328 12:06:21.576363   18107 start.go:494] detecting cgroup driver to use...
	I0328 12:06:21.576449   18107 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 12:06:21.584491   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:06:21.589994   18107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 12:06:21.596025   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:06:21.600518   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 12:06:21.604924   18107 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 12:06:21.658661   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 12:06:21.664407   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:06:21.670230   18107 ssh_runner.go:195] Run: which cri-dockerd
	I0328 12:06:21.671614   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 12:06:21.674665   18107 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 12:06:21.679815   18107 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 12:06:21.743500   18107 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 12:06:21.810740   18107 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 12:06:21.810804   18107 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 12:06:21.816552   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:21.881766   18107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:06:23.036141   18107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154345708s)
	I0328 12:06:23.036220   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 12:06:23.041066   18107 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0328 12:06:23.048119   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:06:23.053133   18107 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 12:06:23.113534   18107 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 12:06:23.181372   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:23.243269   18107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 12:06:23.249606   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:06:23.253837   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:23.312385   18107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 12:06:23.354097   18107 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 12:06:23.354176   18107 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 12:06:23.356326   18107 start.go:562] Will wait 60s for crictl version
	I0328 12:06:23.356383   18107 ssh_runner.go:195] Run: which crictl
	I0328 12:06:23.357962   18107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 12:06:23.373315   18107 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0328 12:06:23.373400   18107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:06:23.390832   18107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:06:21.743533   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:21.743608   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:21.754998   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:21.755074   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:21.765693   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:21.765763   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:21.777265   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:21.777340   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:21.788490   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:21.788570   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:21.799306   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:21.799381   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:21.813240   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:21.813308   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:21.824798   17919 logs.go:276] 0 containers: []
	W0328 12:06:21.824810   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:21.824870   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:21.835125   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:21.835143   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:21.835149   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:21.840068   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:21.840077   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:21.858363   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:21.858374   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:21.881684   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:21.881700   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:21.896548   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:21.896558   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:21.916233   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:21.916244   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:21.931633   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:21.931646   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:21.943208   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:21.943218   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:21.979625   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:21.979635   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:21.993952   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:21.993962   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:22.010221   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:22.010232   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:22.038646   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:22.038657   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:22.052116   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:22.052128   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:22.092093   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:22.092103   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:22.107213   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:22.107225   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:22.118601   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:22.118611   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:24.639396   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:23.409967   18107 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0328 12:06:23.410095   18107 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0328 12:06:23.411568   18107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
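
The one-liner above updates /etc/hosts idempotently: grep -v strips any existing host.minikube.internal entry, the echo appends the current mapping, and the result is staged in a temp file and copied back over /etc/hosts rather than edited in place. The same steps unrolled as a sketch:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
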
	I0328 12:06:23.415438   18107 kubeadm.go:877] updating cluster {Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0328 12:06:23.415479   18107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:06:23.415518   18107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:06:23.425951   18107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:06:23.425960   18107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:06:23.426008   18107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:06:23.428974   18107 ssh_runner.go:195] Run: which lz4
	I0328 12:06:23.430218   18107 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 12:06:23.431455   18107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 12:06:23.431465   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0328 12:06:24.112783   18107 docker.go:649] duration metric: took 682.588667ms to copy over tarball
	I0328 12:06:24.112843   18107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 12:06:25.284921   18107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.172049416s)
	I0328 12:06:25.284935   18107 ssh_runner.go:146] rm: /preloaded.tar.lz4
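
The preload path above ships the whole image cache as one lz4-compressed tarball, copies it into the guest over scp, and unpacks it straight into /var so the docker image store is populated without pulling anything from a registry. The extraction step on its own, with the flags from the log:

    # -I lz4 streams the decompression; --xattrs-include preserves file
    # capabilities on the extracted binaries
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
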
	I0328 12:06:25.300671   18107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:06:25.303581   18107 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0328 12:06:25.309070   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:25.393766   18107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:06:29.641734   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:29.641942   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:29.663147   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:29.663250   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:29.679025   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:29.679136   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:29.692382   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:29.692439   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:29.705559   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:29.705634   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:29.717909   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:29.717988   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:29.729511   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:29.729580   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:29.741160   17919 logs.go:276] 0 containers: []
	W0328 12:06:29.741172   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:29.741231   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:29.752273   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:29.752293   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:29.752299   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:27.003436   18107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.609614417s)
	I0328 12:06:27.003577   18107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:06:27.018806   18107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:06:27.018816   18107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:06:27.018821   18107 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
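
The registry-name mismatch above is the whole story: the preload tarball populated the daemon with k8s.gcr.io/* tags, while this minikube build expects registry.k8s.io/* names, so every required image fails the lookup and falls back to the image cache. A minimal, self-contained sketch of that kind of check (hypothetical names, not minikube's actual API):

package main

import "fmt"

// Stand-in for the preload check implied by the log: nothing matches by
// name because the tarball used the old k8s.gcr.io registry prefix.
func main() {
	preloaded := map[string]bool{
		"k8s.gcr.io/kube-apiserver:v1.24.1": true,
		"k8s.gcr.io/etcd:3.5.3-0":           true,
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.24.1",
		"registry.k8s.io/etcd:3.5.3-0",
	}
	for _, img := range required {
		if !preloaded[img] {
			fmt.Printf("%s wasn't preloaded; falling back to cached images\n", img)
		}
	}
}
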
	I0328 12:06:27.024630   18107 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0328 12:06:27.024672   18107 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:27.024733   18107 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:27.024770   18107 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:27.024811   18107 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:27.024811   18107 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:27.024868   18107 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:27.024913   18107 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:27.034610   18107 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:27.034814   18107 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:27.034921   18107 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:27.034944   18107 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:27.034988   18107 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0328 12:06:27.035041   18107 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:27.035168   18107 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:27.035471   18107 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0328 12:06:29.157477   18107 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0328 12:06:29.158054   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.191283   18107 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0328 12:06:29.191335   18107 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.191436   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.210898   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0328 12:06:29.211068   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:06:29.213570   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0328 12:06:29.213591   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0328 12:06:29.252302   18107 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:06:29.252322   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0328 12:06:29.284223   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.299671   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
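
Each missing image follows the same three-step pattern visible above: stat the target path on the guest, scp from the host cache if absent, then pipe the tarball into docker load. A local sketch of the final step (assumes a docker CLI on PATH; the real code streams the same pipeline over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage pipes a saved image tarball into `docker load`, mirroring the
// `sudo cat <tar> | docker load` step in the log above.
func loadImage(tarPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
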
	I0328 12:06:29.299708   18107 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0328 12:06:29.299727   18107 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.299777   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.310344   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0328 12:06:29.318583   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.320683   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.321345   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.329442   18107 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0328 12:06:29.329469   18107 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.329517   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.335185   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.337599   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0328 12:06:29.343359   18107 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0328 12:06:29.343383   18107 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.343426   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.343460   18107 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0328 12:06:29.343470   18107 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.343498   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.349106   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0328 12:06:29.377318   18107 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0328 12:06:29.377339   18107 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.377347   18107 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0328 12:06:29.377358   18107 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0328 12:06:29.377390   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.377391   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0328 12:06:29.377396   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0328 12:06:29.377411   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0328 12:06:29.391958   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0328 12:06:29.391959   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0328 12:06:29.392067   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0328 12:06:29.393596   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0328 12:06:29.393608   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0328 12:06:29.401354   18107 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0328 12:06:29.401362   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0328 12:06:29.429855   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0328 12:06:29.653702   18107 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0328 12:06:29.653875   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.672066   18107 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0328 12:06:29.672107   18107 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.672185   18107 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.689373   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 12:06:29.689503   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:06:29.691144   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0328 12:06:29.691166   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0328 12:06:29.719560   18107 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:06:29.719572   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0328 12:06:29.965500   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 12:06:29.965532   18107 cache_images.go:92] duration metric: took 2.94666975s to LoadCachedImages
	W0328 12:06:29.965572   18107 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0328 12:06:29.965578   18107 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0328 12:06:29.965637   18107 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-732000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 12:06:29.965708   18107 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 12:06:29.979119   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:06:29.979131   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
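
The bridge recommendation logged above follows from driver, runtime, and Kubernetes version: docker on v1.24+ has no built-in networking path, so minikube picks the bridge CNI. A toy version of that decision (the exact predicate is an assumption, not minikube's real rule):

package main

import "fmt"

// chooseCNI is a simplified stand-in for the decision logged above:
// docker runtime on Kubernetes v1.24+ gets the bridge CNI recommended.
func chooseCNI(runtime string, major, minor int) string {
	if runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
		return "bridge"
	}
	return ""
}

func main() {
	fmt.Println(chooseCNI("docker", 1, 24)) // bridge
}
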
	I0328 12:06:29.979136   18107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 12:06:29.979144   18107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-732000 NodeName:stopped-upgrade-732000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 12:06:29.979212   18107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-732000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 12:06:29.979271   18107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0328 12:06:29.982994   18107 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 12:06:29.983031   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 12:06:29.986304   18107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0328 12:06:29.991600   18107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 12:06:29.997215   18107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
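
The config shipped as kubeadm.yaml.new above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined with "---"; kubeadm accepts them all in one file. A sketch of only the assembly step, with each rendered document elided:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Each entry stands in for one fully rendered document from the log above.
	docs := []string{
		"apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ...",
		"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n# ...",
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# ...",
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# ...",
	}
	// kubeadm reads multiple documents from a single file separated by "---".
	fmt.Println(strings.Join(docs, "\n---\n"))
}
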
	I0328 12:06:30.003057   18107 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0328 12:06:30.004604   18107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
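
The one-liner above makes the hosts entry idempotent: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back. The same logic in Go against an arbitrary hosts-format file (the path is a placeholder; on the guest the final copy needs sudo):

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites path so exactly one line maps ip to host,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHost("/tmp/hosts.example", "10.0.2.15", "control-plane.minikube.internal")
}
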
	I0328 12:06:30.008348   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:30.073387   18107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:06:30.078536   18107 certs.go:68] Setting up /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000 for IP: 10.0.2.15
	I0328 12:06:30.078543   18107 certs.go:194] generating shared ca certs ...
	I0328 12:06:30.078551   18107 certs.go:226] acquiring lock for ca certs: {Name:mk77bea021df8758c6a5a63d76349b59be8fba89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.078739   18107 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key
	I0328 12:06:30.079067   18107 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key
	I0328 12:06:30.079076   18107 certs.go:256] generating profile certs ...
	I0328 12:06:30.079300   18107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key
	I0328 12:06:30.079316   18107 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c
	I0328 12:06:30.079326   18107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0328 12:06:30.232719   18107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c ...
	I0328 12:06:30.232735   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c: {Name:mk30d932ae259d9e0dca92c2d8cac201b1e35a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.233001   18107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c ...
	I0328 12:06:30.233008   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c: {Name:mk31e2e238e4451bd2cfc5bb7888ea8123fd1cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.233153   18107 certs.go:381] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt
	I0328 12:06:30.233281   18107 certs.go:385] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key
	I0328 12:06:30.233615   18107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.key
	I0328 12:06:30.233799   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem (1338 bytes)
	W0328 12:06:30.233977   18107 certs.go:480] ignoring /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784_empty.pem, impossibly tiny 0 bytes
	I0328 12:06:30.233983   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 12:06:30.234001   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem (1078 bytes)
	I0328 12:06:30.234018   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem (1123 bytes)
	I0328 12:06:30.234039   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem (1675 bytes)
	I0328 12:06:30.234075   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:06:30.234382   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 12:06:30.241224   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 12:06:30.248116   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 12:06:30.255054   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 12:06:30.261817   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 12:06:30.268221   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 12:06:30.275160   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 12:06:30.282717   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 12:06:30.289917   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 12:06:30.296245   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem --> /usr/share/ca-certificates/15784.pem (1338 bytes)
	I0328 12:06:30.303108   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /usr/share/ca-certificates/157842.pem (1708 bytes)
	I0328 12:06:30.310566   18107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 12:06:30.315890   18107 ssh_runner.go:195] Run: openssl version
	I0328 12:06:30.317809   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/157842.pem && ln -fs /usr/share/ca-certificates/157842.pem /etc/ssl/certs/157842.pem"
	I0328 12:06:30.320686   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.322011   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 18:49 /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.322033   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.323829   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/157842.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 12:06:30.327112   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 12:06:30.330289   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.331691   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 19:02 /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.331708   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.333438   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 12:06:30.336127   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15784.pem && ln -fs /usr/share/ca-certificates/15784.pem /etc/ssl/certs/15784.pem"
	I0328 12:06:30.339239   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.340660   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 18:49 /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.340682   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.342294   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15784.pem /etc/ssl/certs/51391683.0"
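
The test -L / ln -fs commands above implement the c_rehash convention: OpenSSL looks up CAs in /etc/ssl/certs by subject hash, so each PEM gets a <hash>.0 symlink (the hash comes from the `openssl x509 -hash -noout` calls in the log). A sketch that derives the hash the same way, by shelling out to openssl (assumes the CLI is installed; the target directory here is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under OpenSSL's subject-hash name,
// e.g. b5213941.0, as the log's ln -fs commands do.
func linkByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link, like ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
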
	I0328 12:06:30.345399   18107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 12:06:30.346684   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 12:06:30.349278   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 12:06:30.351361   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 12:06:30.353319   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 12:06:30.355093   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 12:06:30.356867   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
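
Each `openssl x509 -checkend 86400` run above succeeds only if the certificate remains valid for another 24 hours; minikube checks every control-plane cert this way before deciding not to regenerate them. The equivalent check in pure Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// after duration d, i.e. what `openssl x509 -checkend <seconds>` tests.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
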
	I0328 12:06:30.358548   18107 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:30.358616   18107 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:06:30.368242   18107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 12:06:30.371492   18107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 12:06:30.371499   18107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 12:06:30.371502   18107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 12:06:30.371541   18107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 12:06:30.374565   18107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:06:30.374962   18107 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-732000" does not appear in /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:06:30.375064   18107 kubeconfig.go:62] /Users/jenkins/minikube-integration/17877-15366/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-732000" cluster setting kubeconfig missing "stopped-upgrade-732000" context setting]
	I0328 12:06:30.375263   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.375691   18107 kapi.go:59] client config for stopped-upgrade-732000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043d2d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:06:30.376119   18107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 12:06:30.378851   18107 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-732000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
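
Drift detection here is literally `diff -u old new`: exit status 0 means the configs match, 1 means they differ and the cluster gets reconfigured from the new file, anything else is a diff failure. A sketch of interpreting those exit codes:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs diff -u and maps its exit status:
// 0 = identical, 1 = drift detected, >1 = diff itself failed.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, patch, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(patch)
}
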
	I0328 12:06:30.378855   18107 kubeadm.go:1154] stopping kube-system containers ...
	I0328 12:06:30.378890   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:06:30.389823   18107 docker.go:483] Stopping containers: [b4451a54079a 8610e5a378ef a4c23e1c3563 25f63db07e9f cde1338e3262 e22ff461ac53 63f4fd83f105 c91dd579012c]
	I0328 12:06:30.389885   18107 ssh_runner.go:195] Run: docker stop b4451a54079a 8610e5a378ef a4c23e1c3563 25f63db07e9f cde1338e3262 e22ff461ac53 63f4fd83f105 c91dd579012c
	I0328 12:06:30.400156   18107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 12:06:30.406051   18107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:06:30.408836   18107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:06:30.408841   18107 kubeadm.go:156] found existing configuration files:
	
	I0328 12:06:30.408863   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf
	I0328 12:06:30.411432   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:06:30.411454   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:06:30.414514   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf
	I0328 12:06:30.417207   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:06:30.417230   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:06:30.419651   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf
	I0328 12:06:30.422997   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:06:30.423021   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:06:30.426123   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf
	I0328 12:06:30.428548   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:06:30.428573   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 12:06:30.431304   18107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:06:30.434435   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:30.459771   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:29.779427   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:29.779441   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:29.794634   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:29.794645   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:29.810447   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:29.810459   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:29.832932   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:29.832942   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:29.844894   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:29.844905   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:29.882370   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:29.882382   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:29.903482   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:29.903494   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:29.915947   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:29.915959   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:29.940324   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:29.940335   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:29.982557   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:29.982567   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:29.995399   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:29.995410   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:30.016529   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:30.016539   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:30.020752   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:30.020759   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:30.031821   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:30.031835   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:30.044396   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:30.044412   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
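
Each log-gathering pass by process 17919 is the same loop: resolve container IDs per component with a k8s_<name> filter, then tail 400 lines from each hit. A sketch of that loop (docker CLI assumed; the component list is abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent lists k8s_<name> containers (running or exited) and tails
// their logs, mirroring the docker ps / docker logs pairs in the log above.
func tailComponent(name string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf(">>> %s [%s]\n%s", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		_ = tailComponent(c)
	}
}
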
	I0328 12:06:32.564313   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:31.482598   18107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.022798958s)
	I0328 12:06:31.482612   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:31.599992   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:31.625104   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
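
The restart path re-runs kubeadm one phase at a time instead of a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, etcd, in that order. A sketch of the sequence (same phases and config path as the log; it invokes kubeadm for real, so illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same order as the log: each phase rebuilds one layer of the control plane.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
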
	I0328 12:06:31.648475   18107 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:06:31.648548   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.150195   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.650641   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.655461   18107 api_server.go:72] duration metric: took 1.006974833s to wait for apiserver process to appear ...
	I0328 12:06:32.655471   18107 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:06:32.655479   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
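
From here both processes poll https://10.0.2.15:8443/healthz on a fixed cadence, and every probe times out, which is what the repeating "context deadline exceeded" lines record. A minimal poller with the same shape (the apiserver serves a self-signed cert, hence InsecureSkipVerify in this sketch; the real client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{
			// Sketch only: production code pins the cluster CA certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
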
	I0328 12:06:37.566635   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:37.566783   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:37.584438   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:37.584516   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:37.596891   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:37.596963   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:37.607464   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:37.607523   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:37.618015   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:37.618075   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:37.632221   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:37.632291   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:37.642556   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:37.642612   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:37.653539   17919 logs.go:276] 0 containers: []
	W0328 12:06:37.653551   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:37.653618   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:37.669082   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:37.669097   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:37.669104   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:37.681502   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:37.681513   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:37.695473   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:37.695486   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:37.707323   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:37.707334   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:37.711972   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:37.711980   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:37.723163   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:37.723175   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:37.739001   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:37.739013   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:37.778440   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:37.778453   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:37.790666   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:37.790677   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:37.805579   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:37.805590   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:37.830977   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:37.830989   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:37.852597   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:37.852607   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:37.876240   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:37.876248   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:37.888018   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:37.888031   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:37.922529   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:37.922538   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:37.937345   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:37.937356   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:37.657658   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:37.657675   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:40.457958   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:42.657995   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:42.658070   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:45.460378   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:45.460636   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:45.484321   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:45.484425   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:45.500026   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:45.500104   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:45.514063   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:45.514131   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:45.525813   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:45.525883   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:45.535958   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:45.536025   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:45.546161   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:45.546226   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:45.555743   17919 logs.go:276] 0 containers: []
	W0328 12:06:45.555756   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:45.555814   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:45.566157   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:45.566173   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:45.566178   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:45.605722   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:45.605736   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:06:45.623362   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:45.623375   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:45.645810   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:45.645820   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:45.683990   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:45.684000   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:45.698330   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:45.698341   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:45.712243   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:45.712254   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:45.717084   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:45.717093   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:45.730871   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:45.730882   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:45.755838   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:45.755849   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:45.767758   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:45.767777   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:45.783603   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:45.783614   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:45.795242   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:45.795253   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:45.813105   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:45.813117   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:45.824637   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:45.824648   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:45.836388   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:45.836399   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:48.351984   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:47.658761   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:47.658807   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:53.354254   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:53.354466   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:06:53.379720   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:06:53.379841   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:06:53.396539   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:06:53.396643   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:06:53.410066   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:06:53.410136   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:06:53.421247   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:06:53.421321   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:06:53.432058   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:06:53.432134   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:06:53.442862   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:06:53.442932   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:06:53.452678   17919 logs.go:276] 0 containers: []
	W0328 12:06:53.452689   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:06:53.452750   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:06:53.463187   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:06:53.463204   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:06:53.463210   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:06:53.502811   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:06:53.502819   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:06:53.540849   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:06:53.540859   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:06:53.563455   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:06:53.563465   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:06:53.575040   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:06:53.575051   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:06:53.590075   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:06:53.590087   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:06:53.601947   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:06:53.601961   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:06:53.613266   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:06:53.613276   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:06:53.635981   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:06:53.635988   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:06:53.647312   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:06:53.647322   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:06:53.661038   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:06:53.661047   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:06:53.680525   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:06:53.680535   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:06:53.694536   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:06:53.694545   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:06:53.699027   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:06:53.699034   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:06:53.717586   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:06:53.717597   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:06:53.729207   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:06:53.729218   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
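
The block above is one full pass of the log-gathering cycle that runs each time the apiserver health check fails: list container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component>`, then tail each container's last 400 log lines. A minimal sketch of that pattern in Go (an editor's illustration run locally, not minikube's actual ssh_runner-based code; the component names follow the filters in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherComponentLogs mirrors the cycle above: list container IDs per
    // component, then tail each container's last 400 log lines.
    func gatherComponentLogs(components []string) {
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers failed: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276 above
    		for _, id := range ids {
    			// cf. the repeated `docker logs --tail 400 <id>` runs above
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }

    func main() {
    	gatherComponentLogs([]string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"})
    }
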
	I0328 12:06:52.659474   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:52.659614   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:56.248949   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:57.660833   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:57.660898   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:01.251478   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:01.251877   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:01.287512   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:07:01.287628   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:01.305093   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:07:01.305183   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:01.319503   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:07:01.319577   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:01.331793   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:07:01.331859   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:01.342777   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:07:01.342842   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:01.353780   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:07:01.353850   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:01.364831   17919 logs.go:276] 0 containers: []
	W0328 12:07:01.364844   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:01.364903   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:01.375311   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:07:01.375329   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:07:01.375335   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:07:01.387185   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:07:01.387197   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:07:01.402909   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:07:01.402920   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:07:01.414943   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:07:01.414953   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:07:01.428298   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:07:01.428310   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:07:01.448692   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:07:01.448703   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:07:01.462940   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:07:01.462951   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:07:01.474483   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:01.474493   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:01.479157   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:07:01.479164   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:07:01.491027   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:07:01.491039   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:07:01.508946   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:01.508957   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:01.532238   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:01.532246   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:01.572807   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:07:01.572833   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:07:01.588759   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:07:01.588770   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:07:01.607627   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:07:01.607639   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:01.621546   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:01.621557   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:04.159459   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:02.662188   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:02.662241   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:09.161235   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:09.161422   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:09.185520   17919 logs.go:276] 2 containers: [1bc1f83ead26 52e0bfbb6769]
	I0328 12:07:09.185619   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:09.199346   17919 logs.go:276] 2 containers: [ea48e4d1dbff 95ea60112fdb]
	I0328 12:07:09.199424   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:09.210553   17919 logs.go:276] 1 containers: [2d3d5e023474]
	I0328 12:07:09.210620   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:09.221267   17919 logs.go:276] 2 containers: [a0d166f63471 4a2ee84d2f88]
	I0328 12:07:09.221331   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:09.232217   17919 logs.go:276] 1 containers: [418ff1a2fa7a]
	I0328 12:07:09.232302   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:09.242995   17919 logs.go:276] 2 containers: [bd4e4b5c8e07 34fa11726dcc]
	I0328 12:07:09.243066   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:09.253648   17919 logs.go:276] 0 containers: []
	W0328 12:07:09.253660   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:09.253719   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:09.263778   17919 logs.go:276] 1 containers: [915bc00b104e]
	I0328 12:07:09.263796   17919 logs.go:123] Gathering logs for kube-apiserver [1bc1f83ead26] ...
	I0328 12:07:09.263801   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bc1f83ead26"
	I0328 12:07:09.278109   17919 logs.go:123] Gathering logs for etcd [ea48e4d1dbff] ...
	I0328 12:07:09.278121   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48e4d1dbff"
	I0328 12:07:09.292987   17919 logs.go:123] Gathering logs for kube-proxy [418ff1a2fa7a] ...
	I0328 12:07:09.292997   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ff1a2fa7a"
	I0328 12:07:09.304763   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:09.304774   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:09.309273   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:09.309280   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:09.346221   17919 logs.go:123] Gathering logs for coredns [2d3d5e023474] ...
	I0328 12:07:09.346231   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d3d5e023474"
	I0328 12:07:09.358094   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:09.358105   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:09.381103   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:07:09.381110   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:09.392485   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:09.392498   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:09.432674   17919 logs.go:123] Gathering logs for kube-apiserver [52e0bfbb6769] ...
	I0328 12:07:09.432688   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52e0bfbb6769"
	I0328 12:07:09.453182   17919 logs.go:123] Gathering logs for etcd [95ea60112fdb] ...
	I0328 12:07:09.453196   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ea60112fdb"
	I0328 12:07:09.472414   17919 logs.go:123] Gathering logs for kube-scheduler [4a2ee84d2f88] ...
	I0328 12:07:09.472426   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a2ee84d2f88"
	I0328 12:07:09.486920   17919 logs.go:123] Gathering logs for kube-controller-manager [bd4e4b5c8e07] ...
	I0328 12:07:09.486933   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4e4b5c8e07"
	I0328 12:07:09.504432   17919 logs.go:123] Gathering logs for kube-controller-manager [34fa11726dcc] ...
	I0328 12:07:09.504442   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fa11726dcc"
	I0328 12:07:09.517884   17919 logs.go:123] Gathering logs for kube-scheduler [a0d166f63471] ...
	I0328 12:07:09.517894   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0d166f63471"
	I0328 12:07:09.529869   17919 logs.go:123] Gathering logs for storage-provisioner [915bc00b104e] ...
	I0328 12:07:09.529881   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 915bc00b104e"
	I0328 12:07:07.663695   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:07.663807   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:12.044007   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:12.666448   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:12.666493   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:17.046374   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
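
The interleaved "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs throughout this run are a poll loop: GET /healthz with a short per-request timeout, retried until an overall deadline expires. A minimal sketch of that pattern, assuming a 5s per-request timeout to match the cadence above (hypothetical, not minikube's exact implementation):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request timeout, matching the ~5s cadence above
    		Transport: &http.Transport{
    			// health probe only; the real client pins the cluster CA instead
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("stopped: %s: %v\n", url, err) // cf. api_server.go:269
    			time.Sleep(500 * time.Millisecond)        // avoid hot-looping on fast failures
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil
    		}
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", overall)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
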
	I0328 12:07:17.046491   17919 kubeadm.go:591] duration metric: took 4m4.546710208s to restartPrimaryControlPlane
	W0328 12:07:17.046564   17919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 12:07:17.046591   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0328 12:07:18.057713   17919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.011098041s)
	I0328 12:07:18.057774   17919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 12:07:18.062611   17919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:07:18.065644   17919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:07:18.068427   17919 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:07:18.068433   17919 kubeadm.go:156] found existing configuration files:
	
	I0328 12:07:18.068455   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf
	I0328 12:07:18.071169   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:07:18.071192   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:07:18.074602   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf
	I0328 12:07:18.077628   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:07:18.077650   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:07:18.080056   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf
	I0328 12:07:18.082933   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:07:18.082958   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:07:18.085703   17919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf
	I0328 12:07:18.088094   17919 kubeadm.go:162] "https://control-plane.minikube.internal:53167" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53167 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:07:18.088113   17919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
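
The grep/rm sequence above is a stale-kubeconfig sweep: each file under /etc/kubernetes must mention the expected control-plane endpoint, and any file that does not (or, as here, does not exist) is removed so that `kubeadm init` can regenerate it. A local sketch of the same check in Go (hypothetical, simplified from the SSH commands above):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // cleanStaleConfigs removes kubeconfigs that don't reference the expected
    // control-plane endpoint, mirroring the grep-then-rm pairs above.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path) // cf. kubeadm.go:162
    			os.Remove(path) // ignore the error; the file may already be absent
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:53167")
    }
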
	I0328 12:07:18.091034   17919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 12:07:18.108775   17919 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0328 12:07:18.109029   17919 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 12:07:18.164695   17919 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 12:07:18.164761   17919 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 12:07:18.164809   17919 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 12:07:18.214949   17919 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 12:07:18.220041   17919 out.go:204]   - Generating certificates and keys ...
	I0328 12:07:18.220075   17919 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 12:07:18.220107   17919 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 12:07:18.220149   17919 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 12:07:18.220191   17919 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 12:07:18.220227   17919 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 12:07:18.220259   17919 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 12:07:18.220297   17919 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 12:07:18.220334   17919 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 12:07:18.220370   17919 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 12:07:18.220406   17919 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 12:07:18.220424   17919 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 12:07:18.220448   17919 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 12:07:18.436976   17919 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 12:07:18.521940   17919 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 12:07:18.603576   17919 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 12:07:18.762835   17919 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 12:07:18.793821   17919 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 12:07:18.794357   17919 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 12:07:18.794386   17919 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 12:07:18.884729   17919 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 12:07:18.889337   17919 out.go:204]   - Booting up control plane ...
	I0328 12:07:18.889384   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 12:07:18.889415   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 12:07:18.889443   17919 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 12:07:18.889483   17919 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 12:07:18.889574   17919 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 12:07:17.668794   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:17.668815   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:23.388573   17919 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501509 seconds
	I0328 12:07:23.388640   17919 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 12:07:23.393410   17919 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 12:07:23.905423   17919 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 12:07:23.905602   17919 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-623000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 12:07:24.409351   17919 kubeadm.go:309] [bootstrap-token] Using token: laorjf.a9sshcpx4y1fhue1
	I0328 12:07:24.412933   17919 out.go:204]   - Configuring RBAC rules ...
	I0328 12:07:24.412993   17919 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 12:07:24.413056   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 12:07:24.414772   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 12:07:24.417203   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 12:07:24.418073   17919 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 12:07:24.418962   17919 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 12:07:24.421831   17919 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 12:07:24.609290   17919 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 12:07:24.814315   17919 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 12:07:24.814893   17919 kubeadm.go:309] 
	I0328 12:07:24.814928   17919 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 12:07:24.814936   17919 kubeadm.go:309] 
	I0328 12:07:24.814979   17919 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 12:07:24.814982   17919 kubeadm.go:309] 
	I0328 12:07:24.814995   17919 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 12:07:24.815026   17919 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 12:07:24.815056   17919 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 12:07:24.815060   17919 kubeadm.go:309] 
	I0328 12:07:24.815094   17919 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 12:07:24.815099   17919 kubeadm.go:309] 
	I0328 12:07:24.815123   17919 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 12:07:24.815125   17919 kubeadm.go:309] 
	I0328 12:07:24.815150   17919 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 12:07:24.815188   17919 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 12:07:24.815232   17919 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 12:07:24.815236   17919 kubeadm.go:309] 
	I0328 12:07:24.815279   17919 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 12:07:24.815326   17919 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 12:07:24.815329   17919 kubeadm.go:309] 
	I0328 12:07:24.815378   17919 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token laorjf.a9sshcpx4y1fhue1 \
	I0328 12:07:24.815431   17919 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b \
	I0328 12:07:24.815444   17919 kubeadm.go:309] 	--control-plane 
	I0328 12:07:24.815447   17919 kubeadm.go:309] 
	I0328 12:07:24.815494   17919 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 12:07:24.815498   17919 kubeadm.go:309] 
	I0328 12:07:24.815538   17919 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token laorjf.a9sshcpx4y1fhue1 \
	I0328 12:07:24.815609   17919 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b 
	I0328 12:07:24.815671   17919 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 12:07:24.815678   17919 cni.go:84] Creating CNI manager for ""
	I0328 12:07:24.815685   17919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:07:24.818556   17919 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 12:07:24.824406   17919 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 12:07:24.827550   17919 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
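
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not in this log; the sketch below writes a typical bridge-plus-portmap conflist of the same general shape (the subnet and plugin settings are illustrative assumptions, not the file minikube actually ships):

    package main

    import "os"

    // A representative bridge CNI conflist; values here are assumptions for
    // illustration, not the verbatim contents of minikube's 1-k8s.conflist.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	os.MkdirAll("/etc/cni/net.d", 0o755)
    	os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }
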
	I0328 12:07:24.832718   17919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 12:07:24.832775   17919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 12:07:24.832784   17919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-623000 minikube.k8s.io/updated_at=2024_03_28T12_07_24_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967 minikube.k8s.io/name=running-upgrade-623000 minikube.k8s.io/primary=true
	I0328 12:07:24.881844   17919 ops.go:34] apiserver oom_adj: -16
	I0328 12:07:24.881855   17919 kubeadm.go:1107] duration metric: took 49.113167ms to wait for elevateKubeSystemPrivileges
	W0328 12:07:24.881895   17919 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 12:07:24.881901   17919 kubeadm.go:393] duration metric: took 4m12.395622541s to StartCluster
	I0328 12:07:24.881910   17919 settings.go:142] acquiring lock: {Name:mkfc1d043149af7cff65561e827dba55cefba229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:07:24.882085   17919 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:07:24.882551   17919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:07:24.882728   17919 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:07:24.886472   17919 out.go:177] * Verifying Kubernetes components...
	I0328 12:07:24.882787   17919 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 12:07:24.882937   17919 config.go:182] Loaded profile config "running-upgrade-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:07:24.894275   17919 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-623000"
	I0328 12:07:24.894287   17919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:07:24.894277   17919 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-623000"
	I0328 12:07:24.894297   17919 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-623000"
	W0328 12:07:24.894301   17919 addons.go:243] addon storage-provisioner should already be in state true
	I0328 12:07:24.894304   17919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-623000"
	I0328 12:07:24.894344   17919 host.go:66] Checking if "running-upgrade-623000" exists ...
	I0328 12:07:24.895593   17919 kapi.go:59] client config for running-upgrade-623000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/running-upgrade-623000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101caed60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:07:24.895794   17919 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-623000"
	W0328 12:07:24.895800   17919 addons.go:243] addon default-storageclass should already be in state true
	I0328 12:07:24.895807   17919 host.go:66] Checking if "running-upgrade-623000" exists ...
	I0328 12:07:24.899453   17919 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:07:22.671133   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:22.671218   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:24.903429   17919 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:07:24.903434   17919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 12:07:24.903440   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:07:24.904147   17919 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 12:07:24.904152   17919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 12:07:24.904156   17919 sshutil.go:53] new ssh client: &{IP:localhost Port:53135 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/running-upgrade-623000/id_rsa Username:docker}
	I0328 12:07:24.988781   17919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:07:24.993471   17919 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:07:24.993515   17919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:07:24.998011   17919 api_server.go:72] duration metric: took 115.271708ms to wait for apiserver process to appear ...
	I0328 12:07:24.998020   17919 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:07:24.998026   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:25.023114   17919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 12:07:25.024427   17919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:07:27.672611   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:27.672672   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:30.000285   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:30.000360   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:32.676587   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:32.676977   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:32.707005   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:32.707133   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:32.724954   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:32.725050   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:32.738370   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:32.738439   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:32.749947   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:32.750015   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:32.760483   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:32.760560   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:32.771157   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:32.771223   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:32.785872   18107 logs.go:276] 0 containers: []
	W0328 12:07:32.785888   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:32.785950   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:32.796459   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:32.796484   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:32.796489   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:32.807026   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:32.807036   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:32.832096   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:32.832103   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:32.938604   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:32.938617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:32.952769   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:32.952778   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:32.968025   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:32.968035   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:32.990137   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:32.990147   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:33.007317   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:33.007326   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:33.019122   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:33.019134   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:33.057154   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:33.057161   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:33.074717   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:33.074730   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:33.086573   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:33.086584   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:33.101358   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:33.101370   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:33.114813   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:33.114824   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:33.119080   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:33.119093   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:33.130700   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:33.130709   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:35.658264   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:35.001573   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:35.001621   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:40.658575   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:40.658791   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:40.002238   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:40.002262   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:40.671888   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:40.671973   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:40.682953   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:40.683027   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:40.695053   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:40.695128   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:40.705413   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:40.705479   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:40.716124   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:40.716202   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:40.726876   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:40.726949   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:40.737059   18107 logs.go:276] 0 containers: []
	W0328 12:07:40.737068   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:40.737131   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:40.747554   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:40.747569   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:40.747574   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:40.766308   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:40.766321   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:40.777627   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:40.777637   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:40.818914   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:40.818925   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:40.830770   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:40.830783   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:40.847289   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:40.847299   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:40.871295   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:40.871305   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:40.890329   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:40.890340   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:40.904371   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:40.904382   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:40.917228   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:40.917240   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:40.936168   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:40.936188   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:40.978096   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:40.978110   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:40.982230   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:40.982237   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:40.996080   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:40.996090   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:41.010202   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:41.010215   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:41.024684   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:41.024697   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:43.552925   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:45.003011   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:45.003034   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:48.555757   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:48.556127   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:48.601840   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:48.601994   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:48.624628   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:48.624716   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:48.637401   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:48.637475   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:48.649032   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:48.649099   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:48.659627   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:48.659695   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:48.670134   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:48.670195   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:48.680882   18107 logs.go:276] 0 containers: []
	W0328 12:07:48.680894   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:48.680955   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:48.691326   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:48.691342   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:48.691348   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:48.702637   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:48.702647   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:48.713680   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:48.713692   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:48.728729   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:48.728740   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:48.733205   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:48.733214   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:48.756509   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:48.756520   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:48.794097   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:48.794118   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:48.819672   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:48.819683   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:48.834302   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:48.834316   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:48.845762   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:48.845773   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:48.864329   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:48.864341   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:48.882229   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:48.882238   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:48.895686   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:48.895699   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:48.910009   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:48.910019   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:48.926699   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:48.926711   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:48.938364   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:48.938374   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:50.003963   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:50.004021   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:55.005279   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:55.005309   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0328 12:07:55.371440   17919 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0328 12:07:55.376617   17919 out.go:177] * Enabled addons: storage-provisioner
	I0328 12:07:51.476772   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:55.384656   17919 addons.go:505] duration metric: took 30.501517542s for enable addons: enabled=[storage-provisioner]
	I0328 12:07:56.479122   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:56.479247   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:56.493458   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:56.493532   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:56.504402   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:56.504466   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:56.515210   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:56.515288   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:56.525429   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:56.525502   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:56.535633   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:56.535719   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:56.554090   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:56.554156   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:56.567303   18107 logs.go:276] 0 containers: []
	W0328 12:07:56.567313   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:56.567371   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:56.577974   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:56.577990   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:56.577996   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:56.595037   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:56.595050   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:56.606782   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:56.606792   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:56.642602   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:56.642609   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:56.646514   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:56.646522   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:56.660867   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:56.660877   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:56.672769   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:56.672782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:56.687418   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:56.687432   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:56.698575   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:56.698587   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:56.710235   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:56.710247   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:56.744752   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:56.744766   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:56.762793   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:56.762803   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:56.774331   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:56.774341   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:56.797424   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:56.797430   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:56.811772   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:56.811782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:56.836955   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:56.836968   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:59.357296   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:00.006877   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:00.006952   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:04.359769   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:04.360009   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:04.388003   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:04.388124   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:04.405130   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:04.405220   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:04.418536   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:04.418612   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:04.429673   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:04.429743   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:04.439946   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:04.440015   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:04.450395   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:04.450464   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:04.465884   18107 logs.go:276] 0 containers: []
	W0328 12:08:04.465898   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:04.465958   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:04.476448   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:04.476466   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:04.476472   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:04.490547   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:04.490558   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:04.501963   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:04.501973   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:04.519470   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:04.519483   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:04.532464   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:04.532474   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:04.567111   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:04.567123   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:04.590654   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:04.590662   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:04.606493   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:04.606504   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:04.644777   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:04.644788   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:04.649329   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:04.649336   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:04.664558   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:04.664572   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:04.676046   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:04.676060   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:04.691296   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:04.691307   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:04.716033   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:04.716045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:04.730867   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:04.730878   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:04.742883   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:04.742894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:05.009318   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:05.009337   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:07.256509   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:10.011550   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:10.011581   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:12.258926   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:12.259043   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:12.270068   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:12.270148   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:12.281417   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:12.281489   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:12.291870   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:12.291933   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:12.302599   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:12.302666   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:12.312578   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:12.312654   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:12.323082   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:12.323152   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:12.333564   18107 logs.go:276] 0 containers: []
	W0328 12:08:12.333575   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:12.333635   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:12.344306   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:12.344323   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:12.344329   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:12.379426   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:12.379440   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:12.394153   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:12.394162   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:12.405638   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:12.405652   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:12.429894   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:12.429901   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:12.448758   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:12.448771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:12.466761   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:12.466771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:12.484085   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:12.484097   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:12.495166   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:12.495177   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:12.532877   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:12.532888   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:12.537199   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:12.537205   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:12.548333   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:12.548344   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:12.562703   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:12.562714   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:12.584609   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:12.584620   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:12.609787   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:12.609798   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:12.621308   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:12.621317   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:15.013892   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:15.013931   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:15.135324   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:20.137641   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:20.137817   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:20.154639   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:20.154719   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:20.167604   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:20.167676   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:20.178204   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:20.178277   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:20.188409   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:20.188474   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:20.199407   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:20.199477   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:20.213685   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:20.213757   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:20.223473   18107 logs.go:276] 0 containers: []
	W0328 12:08:20.223484   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:20.223538   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:20.238302   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:20.238318   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:20.238323   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:20.242346   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:20.242359   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:20.266879   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:20.266890   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:20.278035   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:20.278045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:20.293665   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:20.293677   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:20.308013   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:20.308023   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:20.325827   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:20.325837   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:20.363955   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:20.363964   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:20.398561   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:20.398573   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:20.412238   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:20.412253   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:20.426804   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:20.426813   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:20.439689   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:20.439699   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:20.453397   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:20.453413   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:20.468320   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:20.468330   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:20.479684   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:20.479694   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:20.493125   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:20.493134   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:20.015775   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:20.015808   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:23.018336   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:25.018085   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:25.018216   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:25.030070   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:25.030154   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:25.040184   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:25.040250   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:25.059749   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:25.059825   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:25.088333   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:25.088417   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:25.108443   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:25.108521   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:25.118643   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:25.118711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:25.128319   17919 logs.go:276] 0 containers: []
	W0328 12:08:25.128332   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:25.128390   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:25.138513   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:25.138529   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:25.138535   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:25.174427   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:25.174439   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:25.188441   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:25.188454   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:25.202200   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:25.202209   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:25.214270   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:25.214283   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:25.225542   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:25.225552   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:25.245775   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:25.245788   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:25.268625   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:25.268637   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:25.272990   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:25.272995   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:25.306511   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:25.306525   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:25.321881   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:25.321892   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:25.337917   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:25.337927   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:25.349808   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:25.349822   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
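
The recurring "container status" command is worth unpacking: inside the double quotes the backticks still expand, so "which crictl || echo crictl" runs on the node first. When crictl is installed, its full path is substituted; when it is not, "which" fails, echo substitutes the bare word crictl, that invocation then fails too, and the trailing "|| sudo docker ps -a" provides the Docker fallback. The same runtime-agnostic fallback, sketched in Go (illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prefer the CRI-generic crictl; fall back to docker when it is absent.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}
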
	I0328 12:08:27.861908   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:28.020663   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:28.020855   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:28.045507   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:28.045635   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:28.061144   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:28.061237   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:28.074084   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:28.074157   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:28.085318   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:28.085391   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:28.095690   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:28.095757   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:28.106274   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:28.106343   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:28.116512   18107 logs.go:276] 0 containers: []
	W0328 12:08:28.116527   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:28.116589   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:28.127100   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:28.127117   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:28.127122   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:28.145698   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:28.145710   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:28.167499   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:28.167510   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:28.202631   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:28.202642   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:28.215489   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:28.215500   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:28.238190   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:28.238201   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:28.249368   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:28.249377   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:28.272557   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:28.272565   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:28.284158   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:28.284169   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:28.288840   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:28.288847   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:28.302894   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:28.302905   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:28.327717   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:28.327728   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:28.339134   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:28.339143   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:28.357271   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:28.357281   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:28.370844   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:28.370854   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:28.409145   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:28.409158   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:32.864307   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:32.864438   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:32.876212   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:32.876286   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:32.887528   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:32.887606   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:32.898539   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:32.898608   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:32.909162   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:32.909231   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:32.922515   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:32.922587   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:32.932993   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:32.933062   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:32.943189   17919 logs.go:276] 0 containers: []
	W0328 12:08:32.943203   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:32.943255   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:32.953389   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:32.953408   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:32.953414   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:32.957737   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:32.957747   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:32.972187   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:32.972198   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:32.984195   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:32.984206   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:32.995796   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:32.995807   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:33.010171   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:33.010186   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:33.034561   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:33.034570   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:33.046935   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:33.046945   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:33.083412   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:33.083421   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:33.118123   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:33.118137   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:33.131938   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:33.131949   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:33.144085   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:33.144097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:33.162369   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:33.162382   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:30.929275   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:35.677225   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:35.929976   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:35.930184   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:35.955168   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:35.955275   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:35.972398   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:35.972479   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:35.986653   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:35.986729   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:35.998488   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:35.998559   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:36.009275   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:36.009345   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:36.020683   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:36.020750   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:36.030895   18107 logs.go:276] 0 containers: []
	W0328 12:08:36.030907   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:36.030967   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:36.041175   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:36.041193   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:36.041198   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:36.055096   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:36.055110   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:36.086681   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:36.086692   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:36.104135   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:36.104146   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:36.117454   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:36.117464   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:36.130006   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:36.130017   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:36.164851   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:36.164863   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:36.179573   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:36.179585   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:36.197854   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:36.197864   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:36.212692   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:36.212702   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:36.224287   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:36.224300   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:36.236127   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:36.236137   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:36.247540   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:36.247554   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:36.270584   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:36.270592   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:36.307982   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:36.307989   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:36.312076   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:36.312082   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:38.827505   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:40.678132   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:40.678462   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:40.708678   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:40.708803   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:40.726383   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:40.726475   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:40.739751   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:40.739829   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:40.752086   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:40.752154   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:40.762601   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:40.762679   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:40.772892   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:40.772964   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:40.782734   17919 logs.go:276] 0 containers: []
	W0328 12:08:40.782744   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:40.782799   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:40.793778   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:40.793794   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:40.793800   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:40.812702   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:40.812713   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:40.824339   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:40.824352   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:40.860520   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:40.860529   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:40.864996   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:40.865003   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:40.903409   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:40.903422   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:40.918034   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:40.918044   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:40.929946   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:40.929957   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:40.954009   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:40.954017   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:40.968485   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:40.968495   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:40.980003   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:40.980016   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:40.992314   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:40.992324   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:41.007176   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:41.007187   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:43.523077   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:43.830059   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:43.830286   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:43.860047   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:43.860148   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:43.880723   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:43.880797   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:43.894127   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:43.894198   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:43.904114   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:43.904182   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:43.914833   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:43.914907   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:43.925756   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:43.925823   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:43.936434   18107 logs.go:276] 0 containers: []
	W0328 12:08:43.936446   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:43.936497   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:43.946581   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:43.946598   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:43.946604   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:43.957650   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:43.957662   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:43.969833   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:43.969842   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:43.985238   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:43.985249   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:43.999233   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:43.999244   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:44.003471   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:44.003477   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:44.028665   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:44.028676   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:44.042769   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:44.042782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:44.054669   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:44.054681   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:44.092031   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:44.092041   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:44.106162   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:44.106172   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:44.129887   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:44.129897   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:44.141853   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:44.141864   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:44.176884   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:44.176896   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:44.190996   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:44.191007   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:44.208700   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:44.208712   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:48.525485   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:48.525686   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:48.545069   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:48.545164   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:48.559335   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:48.559407   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:48.573361   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:48.573429   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:48.584370   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:48.584446   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:48.594844   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:48.594905   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:48.604694   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:48.604752   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:48.617002   17919 logs.go:276] 0 containers: []
	W0328 12:08:48.617013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:48.617071   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:48.627566   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:48.627580   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:48.627586   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:48.664486   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:48.664501   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:48.669295   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:48.669305   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:48.706120   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:48.706131   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:48.720670   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:48.720680   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:48.733352   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:48.733366   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:48.744625   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:48.744640   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:48.759350   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:48.759362   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:48.771139   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:48.771149   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:48.788503   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:48.788513   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:48.812842   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:48.812851   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:48.824906   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:48.824917   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:48.839597   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:48.839606   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:46.727222   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:51.352977   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:51.728740   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:51.729117   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:51.766371   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:51.766500   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:51.791082   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:51.791181   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:51.805924   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:51.805995   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:51.818765   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:51.818834   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:51.830750   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:51.830820   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:51.847498   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:51.847564   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:51.857599   18107 logs.go:276] 0 containers: []
	W0328 12:08:51.857610   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:51.857670   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:51.867773   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:51.867790   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:51.867796   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:51.883060   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:51.883072   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:51.902035   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:51.902045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:51.914042   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:51.914057   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:51.938709   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:51.938717   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:51.968544   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:51.968554   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:51.980609   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:51.980619   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:51.992541   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:51.992552   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:51.996609   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:51.996617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:52.021734   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:52.021744   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:52.033440   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:52.033450   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:52.050486   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:52.050499   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:52.086711   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:52.086721   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:52.101206   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:52.101219   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:52.114955   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:52.114968   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:52.151899   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:52.151910   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
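
Editor's note: each diagnostics cycle begins with a discovery pass, one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per component, producing the "N containers: [...]" lines. A small sketch of that step, assuming a docker CLI on PATH; the function name containerIDs is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (including exited ones, via -a)
    // whose name matches the k8s_<component> prefix, mirroring the filter
    // used in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        // Matches the shape of the "N containers: [...]" log lines.
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

Two IDs per control-plane component (as in "[31ac8c0b33dd cde1338e3262]") indicate a restarted container: the exited instance plus its replacement, which is why the sweep then tails logs from both.
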
	I0328 12:08:54.669190   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:56.355241   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:56.355366   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:56.366185   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:08:56.366265   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:56.376661   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:08:56.376731   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:56.387116   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:08:56.387191   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:56.397124   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:08:56.397189   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:56.407613   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:08:56.407691   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:56.418345   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:08:56.418424   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:56.428388   17919 logs.go:276] 0 containers: []
	W0328 12:08:56.428399   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:56.428457   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:56.438831   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:08:56.438845   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:08:56.438851   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:08:56.450634   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:08:56.450647   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:56.463173   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:08:56.463184   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:08:56.478965   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:08:56.478976   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:08:56.493135   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:08:56.493145   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:08:56.504973   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:08:56.504983   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:08:56.516825   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:08:56.516837   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:08:56.536687   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:56.536701   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:56.561222   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:56.561234   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:56.596844   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:56.596853   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:56.601031   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:56.601037   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:56.634374   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:08:56.634388   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:08:56.650031   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:08:56.650042   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:08:59.169239   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:59.669864   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:59.670079   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:59.697542   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:59.697665   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:59.720990   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:59.721070   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:59.733957   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:59.734017   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:59.745447   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:59.745518   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:59.756150   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:59.756224   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:59.767167   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:59.767235   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:59.777380   18107 logs.go:276] 0 containers: []
	W0328 12:08:59.777391   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:59.777445   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:59.789182   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:59.789201   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:59.789206   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:59.800834   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:59.800844   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:59.818750   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:59.818761   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:59.836933   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:59.836943   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:59.849011   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:59.849022   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:59.864239   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:59.864251   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:59.880564   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:59.880575   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:59.917551   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:59.917559   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:59.928983   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:59.928993   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:59.943518   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:59.943527   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:59.948979   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:59.948987   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:59.973815   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:59.973828   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:59.992421   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:59.992431   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:00.009676   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:00.009686   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:00.033980   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:00.033990   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:00.070114   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:00.070128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:04.170387   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:04.170558   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:04.183282   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:04.183354   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:04.194354   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:04.194426   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:04.205026   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:04.205101   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:04.215459   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:04.215526   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:04.231055   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:04.231127   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:04.242065   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:04.242129   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:04.252473   17919 logs.go:276] 0 containers: []
	W0328 12:09:04.252483   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:04.252537   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:04.263376   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:04.263392   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:04.263398   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:04.297016   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:04.297023   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:04.308572   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:04.308587   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:04.321573   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:04.321585   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:04.346199   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:04.346207   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:04.365284   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:04.365295   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:04.382296   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:04.382306   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:04.403270   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:04.403284   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:04.408329   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:04.408335   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:04.442134   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:04.442144   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:04.456456   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:04.456468   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:04.469926   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:04.469936   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:04.481228   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:04.481239   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
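
Editor's note: the "container status" command above is a shell fallback chain. The backtick substitution `which crictl || echo crictl` resolves to the crictl path when installed, or to the bare word "crictl" otherwise; if that command then fails to run, the `|| sudo docker ps -a` branch takes over. A sketch invoking the same one-liner, copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present, otherwise fall back to docker;
        // this is exactly the one-liner the ssh_runner lines execute.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
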
	I0328 12:09:02.586032   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:06.994364   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:07.588397   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:07.588611   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:07.614538   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:07.614628   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:07.628087   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:07.628157   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:07.642627   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:07.642699   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:07.653122   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:07.653195   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:07.664007   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:07.664075   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:07.674314   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:07.674386   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:07.690814   18107 logs.go:276] 0 containers: []
	W0328 12:09:07.690884   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:07.690949   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:07.701662   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:07.701682   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:07.701687   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:07.739834   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:07.739843   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:07.744152   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:07.744160   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:07.758551   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:07.758566   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:07.784125   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:07.784135   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:07.799456   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:07.799466   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:07.814509   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:07.814519   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:07.837871   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:07.837878   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:07.849223   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:07.849234   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:07.883674   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:07.883686   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:07.897523   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:07.897532   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:07.912387   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:07.912397   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:07.929606   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:07.929617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:07.942464   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:07.942474   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:07.953855   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:07.953866   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:07.968284   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:07.968296   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:10.482813   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:11.996746   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:11.996944   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:12.022733   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:12.022853   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:12.037065   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:12.037145   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:12.048968   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:12.049043   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:12.059872   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:12.059936   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:12.070696   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:12.070769   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:12.081755   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:12.081822   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:12.097716   17919 logs.go:276] 0 containers: []
	W0328 12:09:12.097732   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:12.097787   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:12.107942   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:12.107957   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:12.107962   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:12.125640   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:12.125650   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:12.160252   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:12.160266   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:12.195412   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:12.195424   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:12.209663   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:12.209676   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:12.221972   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:12.221984   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:12.240994   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:12.241003   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:12.254781   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:12.254790   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:12.261915   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:12.261925   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:12.277717   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:12.277730   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:12.297130   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:12.297143   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:12.308522   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:12.308532   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:12.332635   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:12.332643   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:15.484240   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:15.484415   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:15.502376   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:15.502451   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:15.516267   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:15.516343   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:15.528924   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:15.528991   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:15.539465   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:15.539531   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:15.550073   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:15.550137   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:15.560487   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:15.560552   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:15.570583   18107 logs.go:276] 0 containers: []
	W0328 12:09:15.570594   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:15.570645   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:15.581115   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:15.581142   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:15.581150   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:15.585153   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:15.585161   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:15.599076   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:15.599087   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:15.610685   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:15.610697   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:15.628527   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:15.628537   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:15.662954   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:15.662965   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:14.846169   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:15.688387   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:15.688397   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:15.713473   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:15.713483   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:15.732374   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:15.732385   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:15.744275   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:15.744286   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:15.768329   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:15.768339   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:15.779343   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:15.779354   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:15.796829   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:15.796838   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:15.833142   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:15.833150   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:15.846718   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:15.846728   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:15.858710   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:15.858721   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:18.373071   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:19.848546   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:19.848661   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:19.860423   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:19.860494   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:19.871318   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:19.871383   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:19.881567   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:19.881639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:19.894087   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:19.894149   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:19.904445   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:19.904520   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:19.914887   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:19.914960   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:19.929149   17919 logs.go:276] 0 containers: []
	W0328 12:09:19.929159   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:19.929219   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:19.944773   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:19.944790   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:19.944798   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:19.980459   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:19.980471   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:19.995341   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:19.995352   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:20.007210   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:20.007221   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:20.027181   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:20.027191   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:20.045344   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:20.045353   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:20.068948   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:20.068958   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:20.103321   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:20.103331   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:20.108589   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:20.108599   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:20.123481   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:20.123491   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:20.136292   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:20.136302   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:20.152064   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:20.152079   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:20.163825   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:20.163837   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
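
Editor's note: the per-component gathering steps all reduce to "docker logs --tail 400 <id>". A minimal sketch of one such gatherer, assuming a local docker CLI; tailContainerLogs is an illustrative name, and the container ID below is taken from the log purely as an example.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs returns the last `lines` lines of a container's logs.
    // docker logs writes to both stdout and stderr, so CombinedOutput captures
    // both streams the way a diagnostic bundle would want.
    func tailContainerLogs(id string, lines int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprint(lines), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("67239a430e57", 400)
        if err != nil {
            fmt.Println("docker logs failed:", err)
            return
        }
        fmt.Print(logs)
    }
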
	I0328 12:09:22.677345   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:23.375532   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:23.375764   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:23.408335   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:23.408440   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:23.434229   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:23.434308   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:23.450485   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:23.450550   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:23.465340   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:23.465413   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:23.476246   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:23.476315   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:23.487255   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:23.487321   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:23.497150   18107 logs.go:276] 0 containers: []
	W0328 12:09:23.497162   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:23.497219   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:23.507678   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:23.507695   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:23.507701   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:23.543000   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:23.543013   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:23.567453   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:23.567466   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:23.581866   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:23.581876   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:23.593467   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:23.593478   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:23.606889   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:23.606899   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:23.620790   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:23.620800   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:23.642451   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:23.642460   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:23.654634   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:23.654645   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:23.677297   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:23.677305   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:23.713490   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:23.713500   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:23.717456   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:23.717464   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:23.737818   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:23.737830   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:23.749176   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:23.749188   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:23.762534   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:23.762548   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:23.773645   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:23.773658   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:27.679750   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:27.679933   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:27.692925   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:27.692999   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:27.703916   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:27.703986   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:27.714440   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:27.714506   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:27.730303   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:27.730373   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:27.740833   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:27.740903   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:27.751309   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:27.751377   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:27.761404   17919 logs.go:276] 0 containers: []
	W0328 12:09:27.761418   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:27.761483   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:27.771627   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:27.771644   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:27.771650   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:27.782807   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:27.782817   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:27.798276   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:27.798285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:27.819163   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:27.819173   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:27.830953   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:27.830962   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:27.866056   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:27.866068   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:27.870512   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:27.870520   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:27.904697   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:27.904710   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:27.919662   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:27.919671   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:27.931759   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:27.931771   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:27.946281   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:27.946292   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:27.957603   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:27.957615   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:27.968957   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:27.968968   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:26.289669   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:30.494558   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:31.292092   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:31.292221   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:31.304576   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:31.304656   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:31.322353   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:31.322427   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:31.332700   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:31.332775   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:31.342935   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:31.343010   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:31.362088   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:31.362164   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:31.374304   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:31.374375   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:31.384042   18107 logs.go:276] 0 containers: []
	W0328 12:09:31.384054   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:31.384117   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:31.394336   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:31.394352   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:31.394358   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:31.408244   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:31.408255   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:31.420106   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:31.420116   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:31.437885   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:31.437894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:31.451291   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:31.451301   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:31.468191   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:31.468202   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:31.505164   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:31.505176   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:31.539249   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:31.539260   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:31.558416   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:31.558429   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:31.570972   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:31.570984   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:31.588556   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:31.588570   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:31.592993   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:31.592999   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:31.608394   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:31.608408   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:31.634795   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:31.634809   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:31.649096   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:31.649106   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:31.660562   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:31.660572   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
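
Editor's note: stepping back, the two interleaved PIDs (17919 and 18107) each follow the same cadence: check healthz, and when the check times out, run a full diagnostics sweep (kubelet and docker journals, dmesg, describe nodes, per-container logs, container status) before checking again. A generic retry-until-deadline loop capturing that shape; all names and durations here are illustrative assumptions, not minikube's actual code.

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer polls check() until it succeeds or the overall
    // deadline passes, running diagnose() after every failed attempt,
    // matching the poll/sweep alternation visible in the log.
    func waitForAPIServer(check func() error, diagnose func(), deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := check(); err == nil {
                return nil
            }
            diagnose() // gather kubelet/docker/dmesg/container logs
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver never became healthy within %s", deadline)
    }

    func main() {
        err := waitForAPIServer(
            func() error { return fmt.Errorf("healthz timed out") }, // stand-in check
            func() { fmt.Println("gathering logs ...") },            // stand-in sweep
            10*time.Second,
        )
        fmt.Println(err)
    }

In this run the check never succeeds, so the cycles repeat until the test's own timeout, which is consistent with the failures recorded in the summary table.
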
	I0328 12:09:34.186045   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:35.495499   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:35.495711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:35.516593   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:35.516700   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:35.530910   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:35.530985   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:35.542828   17919 logs.go:276] 2 containers: [9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:35.542887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:35.553193   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:35.553268   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:35.563878   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:35.563939   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:35.574976   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:35.575042   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:35.585307   17919 logs.go:276] 0 containers: []
	W0328 12:09:35.585319   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:35.585378   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:35.595747   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:35.595761   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:35.595766   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:35.606979   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:35.606990   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:35.625448   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:35.625457   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:35.637096   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:35.637106   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:35.672745   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:35.672753   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:35.677157   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:35.677164   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:35.691477   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:35.691487   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:35.705830   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:35.705844   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:35.717272   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:35.717283   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:35.741954   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:35.741961   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:35.782054   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:35.782067   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:35.794480   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:35.794490   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:35.812233   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:35.812243   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:38.324115   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:39.188596   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:39.188915   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:39.222986   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:39.223118   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:39.242598   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:39.242688   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:39.257064   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:39.257143   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:39.269152   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:39.269219   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:39.280181   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:39.280241   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:39.291157   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:39.291227   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:39.301668   18107 logs.go:276] 0 containers: []
	W0328 12:09:39.301679   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:39.301737   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:39.312039   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:39.312054   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:39.312059   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:39.325908   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:39.325919   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:39.351508   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:39.351520   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:39.365200   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:39.365210   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:39.388277   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:39.388286   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:39.416747   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:39.416760   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:39.429987   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:39.429997   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:39.466336   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:39.466345   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:39.470077   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:39.470083   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:39.483592   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:39.483606   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:39.500309   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:39.500320   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:39.516219   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:39.516232   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:39.550972   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:39.550983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:39.571361   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:39.571372   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:39.586613   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:39.586631   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:39.601758   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:39.601769   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:43.326513   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:43.326607   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:43.337789   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:43.337859   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:43.348261   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:43.348338   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:43.358681   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:43.358754   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:43.368856   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:43.368922   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:43.379502   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:43.379577   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:43.397921   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:43.397994   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:43.408043   17919 logs.go:276] 0 containers: []
	W0328 12:09:43.408055   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:43.408108   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:43.422481   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:43.422502   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:43.422507   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:43.440795   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:43.440805   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:43.452564   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:43.452575   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:43.466549   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:43.466558   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:43.478247   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:43.478258   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:43.492843   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:43.492855   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:43.504616   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:43.504625   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:43.516962   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:43.516971   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:43.528988   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:43.528997   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:43.546542   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:43.546553   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:43.557957   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:43.557969   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:43.595604   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:43.595615   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:43.600182   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:43.600191   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:43.611223   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:43.611235   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:43.636512   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:43.636520   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:42.115864   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:46.172568   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:47.117311   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:47.117463   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:47.128951   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:47.129029   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:47.139843   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:47.139916   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:47.153349   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:47.153418   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:47.164138   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:47.164211   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:47.180491   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:47.180560   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:47.191693   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:47.191759   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:47.202072   18107 logs.go:276] 0 containers: []
	W0328 12:09:47.202085   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:47.202145   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:47.213416   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:47.213432   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:47.213438   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:47.218211   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:47.218222   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:47.251235   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:47.251247   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:47.266137   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:47.266148   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:47.284674   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:47.284685   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:47.308353   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:47.308361   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:47.320703   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:47.320714   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:47.346186   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:47.346195   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:47.359648   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:47.359659   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:47.373514   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:47.373526   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:47.385486   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:47.385496   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:47.399614   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:47.399626   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:47.411444   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:47.411453   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:47.447808   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:47.447819   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:47.461929   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:47.461942   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:47.473153   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:47.473164   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:49.986881   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:51.175016   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:51.175245   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:51.191410   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:51.191493   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:51.203745   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:51.203816   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:51.214762   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:51.214836   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:51.225736   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:51.225815   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:51.236718   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:51.236788   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:51.250496   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:51.250564   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:51.260560   17919 logs.go:276] 0 containers: []
	W0328 12:09:51.260572   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:51.260637   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:51.271026   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:51.271042   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:51.271048   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:51.306657   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:51.306667   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:51.325021   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:51.325030   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:51.338668   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:51.338678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:51.350548   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:51.350560   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:51.362752   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:51.362763   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:51.380130   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:51.380141   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:51.398224   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:51.398234   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:51.402528   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:51.402537   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:51.413963   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:51.413971   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:51.425028   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:51.425041   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:51.436151   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:51.436162   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:51.455020   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:51.455031   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:51.490196   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:51.490204   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:51.514277   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:51.514284   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:54.028034   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:54.989256   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:54.989446   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:55.001668   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:55.001746   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:55.012531   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:55.012602   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:55.023575   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:55.023650   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:55.038465   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:55.038542   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:55.049108   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:55.049177   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:55.059794   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:55.059865   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:55.070297   18107 logs.go:276] 0 containers: []
	W0328 12:09:55.070312   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:55.070373   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:55.081397   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:55.081414   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:55.081420   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:55.117941   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:55.117950   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:55.134838   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:55.134849   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:55.146023   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:55.146033   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:55.159662   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:55.159675   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:55.199503   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:55.199514   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:55.224013   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:55.224025   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:55.237863   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:55.237873   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:55.249456   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:55.249466   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:55.253772   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:55.253778   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:55.267252   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:55.267262   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:55.290636   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:55.290643   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:55.307731   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:55.307746   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:55.330514   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:55.330525   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:55.356809   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:55.356819   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:55.369318   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:55.369332   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:59.030632   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:59.030791   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:59.045943   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:09:59.046026   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:59.058912   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:09:59.058986   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:59.070423   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:09:59.070491   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:59.081276   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:09:59.081342   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:59.091524   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:09:59.091596   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:59.101611   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:09:59.101677   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:59.112002   17919 logs.go:276] 0 containers: []
	W0328 12:09:59.112013   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:59.112070   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:59.123088   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:09:59.123105   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:09:59.123110   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:09:59.137754   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:09:59.137763   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:09:59.149626   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:59.149637   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:59.175654   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:59.175663   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:59.210244   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:59.210252   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:59.245818   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:09:59.245829   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:09:59.257578   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:09:59.257589   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:59.268986   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:09:59.268995   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:09:59.282739   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:09:59.282749   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:09:59.298307   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:09:59.298318   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:09:59.310050   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:59.310061   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:59.314392   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:09:59.314400   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:09:59.328830   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:09:59.328839   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:09:59.350088   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:09:59.350098   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:09:59.366074   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:09:59.366085   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:09:57.888813   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:01.881214   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:02.891193   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:02.891336   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:02.904268   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:02.904333   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:02.915174   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:02.915237   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:02.925777   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:02.925843   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:02.936205   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:02.936273   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:02.946716   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:02.946782   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:02.957015   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:02.957079   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:02.975308   18107 logs.go:276] 0 containers: []
	W0328 12:10:02.975317   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:02.975367   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:02.985730   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:02.985748   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:02.985754   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:02.997147   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:02.997160   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:03.011321   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:03.011330   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:03.036503   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:03.036514   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:03.059228   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:03.059237   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:03.077457   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:03.077468   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:03.095714   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:03.095724   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:03.107599   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:03.107610   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:03.118873   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:03.118883   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:03.123066   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:03.123072   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:03.139921   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:03.139931   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:03.154762   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:03.154773   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:03.169380   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:03.169390   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:03.181576   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:03.181587   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:03.218470   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:03.218481   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:03.252443   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:03.252454   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:06.883543   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:06.883663   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:06.897657   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:06.897744   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:06.909250   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:06.909332   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:06.919865   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:06.919942   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:06.931292   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:06.931375   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:06.942538   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:06.942614   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:06.953301   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:06.953380   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:06.963513   17919 logs.go:276] 0 containers: []
	W0328 12:10:06.963526   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:06.963599   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:06.974720   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:06.974736   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:06.974741   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:07.009886   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:07.009900   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:07.014453   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:07.014460   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:07.029040   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:07.029050   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:07.041054   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:07.041064   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:07.065871   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:07.065879   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:07.078085   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:07.078097   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:07.090212   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:07.090222   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:07.106310   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:07.106320   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:07.121362   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:07.121377   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:07.132962   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:07.132973   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:07.144425   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:07.144436   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:07.180495   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:07.180505   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:07.195782   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:07.195792   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:07.211200   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:07.211211   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:09.731236   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:05.764840   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:14.733612   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:14.733855   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:14.764007   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:14.764124   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:10.767286   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:10.767498   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:10.791816   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:10.791894   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:10.804788   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:10.804858   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:10.816299   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:10.816371   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:10.826967   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:10.827038   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:10.840285   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:10.840347   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:10.850942   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:10.851004   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:10.860932   18107 logs.go:276] 0 containers: []
	W0328 12:10:10.860941   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:10.860989   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:10.871735   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:10.871751   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:10.871756   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:10.885117   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:10.885128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:10.895971   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:10.895983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:10.907405   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:10.907415   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:10.921730   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:10.921742   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:10.936550   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:10.936559   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:10.948200   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:10.948209   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:10.964943   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:10.964953   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:10.979549   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:10.979558   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:10.983977   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:10.983983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:11.013023   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:11.013035   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:11.025228   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:11.025238   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:11.048275   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:11.048284   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:11.060343   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:11.060354   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:11.099104   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:11.099114   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:11.133331   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:11.133341   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:13.650118   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:14.779854   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:14.781018   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:14.793721   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:14.793795   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:14.810050   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:14.810119   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:14.820465   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:14.820538   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:14.839117   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:14.839178   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:14.849323   17919 logs.go:276] 0 containers: []
	W0328 12:10:14.849336   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:14.849400   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:14.859846   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:14.859862   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:14.859868   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:14.877059   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:14.877069   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:14.881483   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:14.881492   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:14.892822   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:14.892832   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:14.904610   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:14.904620   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:14.917150   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:14.917161   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:14.928954   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:14.928964   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:14.946774   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:14.946787   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:14.958554   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:14.958566   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:14.984374   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:14.984384   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:15.021002   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:15.021013   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:15.032795   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:15.032805   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:15.044457   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:15.044467   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:15.078662   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:15.078672   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:15.092919   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:15.092929   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:17.608938   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:18.652816   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:18.653019   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:18.677171   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:18.677304   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:18.694614   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:18.694694   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:18.707761   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:18.707835   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:18.719327   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:18.719397   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:18.729474   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:18.729541   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:18.740284   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:18.740347   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:18.751551   18107 logs.go:276] 0 containers: []
	W0328 12:10:18.751561   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:18.751621   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:18.762580   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:18.762597   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:18.762602   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:18.785483   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:18.785494   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:18.797563   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:18.797573   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:18.801587   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:18.801596   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:18.812881   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:18.812894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:18.829872   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:18.829884   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:18.841198   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:18.841208   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:18.854776   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:18.854785   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:18.869578   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:18.869589   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:18.884486   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:18.884496   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:18.920323   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:18.920335   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:18.937707   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:18.937721   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:18.951780   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:18.951794   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:18.989559   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:18.989567   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:19.004293   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:19.004303   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:19.029675   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:19.029686   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
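The cycle above (enumerate each control-plane component's containers by name filter, then tail each container's logs) reduces to a small loop. A minimal Go sketch using os/exec; the structure and names are illustrative assumptions, not minikube's actual logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `docker ps -a --filter=name=k8s_<component>` calls
// above and returns the matching container IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Same tail depth as the `docker logs --tail 400 <id>` runs above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}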
	I0328 12:10:22.611452   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:22.611803   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:22.639804   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:22.639926   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:22.658782   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:22.658877   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:22.677815   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:22.677887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:22.688958   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:22.689024   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:22.699729   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:22.699789   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:22.710999   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:22.711075   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:22.720733   17919 logs.go:276] 0 containers: []
	W0328 12:10:22.720745   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:22.720799   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:22.731283   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:22.731301   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:22.731306   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:22.745677   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:22.745688   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:22.767868   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:22.767878   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:22.786463   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:22.786475   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:22.792662   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:22.792672   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:22.804405   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:22.804415   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:22.819261   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:22.819271   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:22.831517   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:22.831527   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:22.856366   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:22.856374   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:22.871274   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:22.871285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:22.883081   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:22.883095   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:22.924261   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:22.924275   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:22.940276   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:22.940285   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:22.954048   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:22.954057   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:22.966442   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:22.966453   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:21.546733   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:25.502220   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:26.547922   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
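The interleaved probes above keep timing out: each "Checking apiserver healthz" is an HTTPS GET whose client timeout fires before response headers arrive, producing the "stopped: ... Client.Timeout exceeded" lines. A minimal sketch of such a probe, assuming a 5-second client timeout and skipping TLS verification for illustration (both are assumptions, not minikube's exact api_server.go configuration):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; matches the ~5s gap between probes
		Transport: &http.Transport{
			// Illustration only: skip verification of the cluster's
			// self-signed certificate instead of loading its CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Surfaces as "Client.Timeout exceeded while awaiting headers".
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}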
	I0328 12:10:26.548183   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:26.574772   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:26.574909   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:26.593587   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:26.593664   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:26.606728   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:26.606807   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:26.618226   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:26.618304   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:26.628770   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:26.628842   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:26.638830   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:26.638903   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:26.649320   18107 logs.go:276] 0 containers: []
	W0328 12:10:26.649330   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:26.649387   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:26.663127   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:26.663145   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:26.663151   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:26.677985   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:26.677995   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:26.691357   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:26.691369   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:26.726045   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:26.726056   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:26.744829   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:26.744839   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:26.765571   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:26.765582   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:26.787122   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:26.787135   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:26.802042   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:26.802053   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:26.806510   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:26.806521   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:26.821074   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:26.821088   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:26.835807   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:26.835820   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:26.849286   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:26.849301   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:26.886165   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:26.886174   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:26.897910   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:26.897920   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:26.921142   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:26.921155   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:26.933232   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:26.933242   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:29.460393   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:30.502823   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:30.503183   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:30.539341   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:30.539482   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:30.561056   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:30.561161   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:30.575482   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:30.575565   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:30.588489   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:30.588580   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:30.600274   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:30.600343   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:30.611269   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:30.611346   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:30.626559   17919 logs.go:276] 0 containers: []
	W0328 12:10:30.626574   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:30.626639   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:30.637690   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:30.637709   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:30.637716   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:30.649331   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:30.649341   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:30.664067   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:30.664083   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:30.701663   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:30.701675   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:30.713100   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:30.713112   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:30.717583   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:30.717590   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:30.729229   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:30.729242   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:30.741398   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:30.741409   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:30.752701   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:30.752711   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:30.773667   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:30.773678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:30.785665   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:30.785678   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:30.800035   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:30.800048   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:30.818175   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:30.818186   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:30.832993   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:30.833003   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:30.858290   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:30.858301   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:33.395006   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:34.463155   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:34.463374   18107 kubeadm.go:591] duration metric: took 4m4.088980625s to restartPrimaryControlPlane
	W0328 12:10:34.463516   18107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 12:10:34.463591   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0328 12:10:35.488825   18107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.025207s)
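The reset run above is reported with a "Completed: ... (1.025207s)" duration because it took longer than a second. A generic sketch of that timed-run pattern; the one-second threshold and the helper are assumptions, not minikube's ssh_runner internals:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun executes a command and logs a duration metric when it is slow,
// echoing the Completed line above.
func timedRun(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
	}
	return err
}

func main() {
	_ = timedRun("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
}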
	I0328 12:10:35.488889   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 12:10:35.493871   18107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:10:35.496735   18107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:10:35.499376   18107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:10:35.499382   18107 kubeadm.go:156] found existing configuration files:
	
	I0328 12:10:35.499403   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf
	I0328 12:10:35.501921   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:10:35.501945   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:10:35.505167   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf
	I0328 12:10:35.508497   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:10:35.508526   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:10:35.511297   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf
	I0328 12:10:35.513847   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:10:35.513868   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:10:35.517046   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf
	I0328 12:10:35.519800   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:10:35.519826   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
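The grep-then-rm sequence above keeps a component kubeconfig only if it already references the expected control-plane endpoint; here none of the files exist, so each grep exits with status 2 and the file is removed (a no-op) so that `kubeadm init` can rewrite it. Sketched as a helper; the function itself is hypothetical, while the paths and endpoint come from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs deletes any kubeconfig that does not mention the
// expected endpoint, mirroring the grep / rm -f pairs above.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Like `rm -f`, removing an already-missing file is harmless.
			os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:53376", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}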
	I0328 12:10:35.522352   18107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 12:10:35.539678   18107 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0328 12:10:35.539727   18107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 12:10:35.591150   18107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 12:10:35.591204   18107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 12:10:35.591267   18107 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0328 12:10:35.639466   18107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 12:10:35.642685   18107 out.go:204]   - Generating certificates and keys ...
	I0328 12:10:35.642718   18107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 12:10:35.642749   18107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 12:10:35.642786   18107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 12:10:35.642817   18107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 12:10:35.642859   18107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 12:10:35.642888   18107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 12:10:35.642921   18107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 12:10:35.642958   18107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 12:10:35.643001   18107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 12:10:35.643040   18107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 12:10:35.643057   18107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 12:10:35.643087   18107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 12:10:35.675910   18107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 12:10:35.745597   18107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 12:10:35.800624   18107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 12:10:35.840594   18107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 12:10:35.869610   18107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 12:10:35.870096   18107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 12:10:35.870121   18107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 12:10:35.939062   18107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 12:10:38.397346   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:38.397475   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:38.408730   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:38.408807   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:38.419996   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:38.420081   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:38.433473   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:38.433551   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:38.444930   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:38.445005   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:38.455970   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:38.456052   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:38.472014   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:38.472087   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:38.483925   17919 logs.go:276] 0 containers: []
	W0328 12:10:38.483957   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:38.484026   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:38.503273   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:38.503289   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:38.503295   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:38.519051   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:38.519062   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:38.536609   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:38.536621   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:38.573486   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:38.573508   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:38.615100   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:38.615114   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:38.630100   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:38.630111   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:38.642087   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:38.642099   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:38.654477   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:38.654491   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:38.666206   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:38.666216   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:38.670858   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:38.670867   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:38.685238   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:38.685249   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:38.697450   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:38.697462   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:38.711693   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:38.711706   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:38.726058   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:38.726073   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:38.749869   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:38.749879   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:35.942555   18107 out.go:204]   - Booting up control plane ...
	I0328 12:10:35.942599   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 12:10:35.942654   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 12:10:35.942690   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 12:10:35.942729   18107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 12:10:35.942808   18107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 12:10:40.442192   18107 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501072 seconds
	I0328 12:10:40.442283   18107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 12:10:40.446912   18107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 12:10:40.959368   18107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 12:10:40.959562   18107 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-732000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 12:10:41.463903   18107 kubeadm.go:309] [bootstrap-token] Using token: c3fq2i.3w6j4tvs3qwbbusu
	I0328 12:10:41.470174   18107 out.go:204]   - Configuring RBAC rules ...
	I0328 12:10:41.470237   18107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 12:10:41.470289   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 12:10:41.475836   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 12:10:41.476787   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 12:10:41.477539   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 12:10:41.478396   18107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 12:10:41.481418   18107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 12:10:41.640985   18107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 12:10:41.868050   18107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 12:10:41.868466   18107 kubeadm.go:309] 
	I0328 12:10:41.868500   18107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 12:10:41.868503   18107 kubeadm.go:309] 
	I0328 12:10:41.868538   18107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 12:10:41.868551   18107 kubeadm.go:309] 
	I0328 12:10:41.868569   18107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 12:10:41.868603   18107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 12:10:41.868634   18107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 12:10:41.868637   18107 kubeadm.go:309] 
	I0328 12:10:41.868662   18107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 12:10:41.868666   18107 kubeadm.go:309] 
	I0328 12:10:41.868691   18107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 12:10:41.868695   18107 kubeadm.go:309] 
	I0328 12:10:41.868721   18107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 12:10:41.868758   18107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 12:10:41.868804   18107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 12:10:41.868810   18107 kubeadm.go:309] 
	I0328 12:10:41.868856   18107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 12:10:41.868901   18107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 12:10:41.868905   18107 kubeadm.go:309] 
	I0328 12:10:41.868949   18107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token c3fq2i.3w6j4tvs3qwbbusu \
	I0328 12:10:41.869010   18107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b \
	I0328 12:10:41.869020   18107 kubeadm.go:309] 	--control-plane 
	I0328 12:10:41.869024   18107 kubeadm.go:309] 
	I0328 12:10:41.869071   18107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 12:10:41.869073   18107 kubeadm.go:309] 
	I0328 12:10:41.869130   18107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token c3fq2i.3w6j4tvs3qwbbusu \
	I0328 12:10:41.869180   18107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b 
	I0328 12:10:41.869307   18107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 12:10:41.869384   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:10:41.869393   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:10:41.871085   18107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 12:10:41.877794   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 12:10:41.880824   18107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
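The 457-byte conflist payload is not shown in the log, but a bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist plausibly resembles the constant below, modeled on the standard containernetworking bridge plugin. The exact contents are an assumption:

package main

import "os"

// bridgeConflist is an assumed example of a bridge + portmap CNI chain,
// not the file minikube actually generated here.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors `sudo mkdir -p /etc/cni/net.d` followed by the scp above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}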
	I0328 12:10:41.885518   18107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 12:10:41.885562   18107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 12:10:41.885576   18107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-732000 minikube.k8s.io/updated_at=2024_03_28T12_10_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967 minikube.k8s.io/name=stopped-upgrade-732000 minikube.k8s.io/primary=true
	I0328 12:10:41.888571   18107 ops.go:34] apiserver oom_adj: -16
	I0328 12:10:41.935865   18107 kubeadm.go:1107] duration metric: took 50.337416ms to wait for elevateKubeSystemPrivileges
	W0328 12:10:41.935885   18107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 12:10:41.935888   18107 kubeadm.go:393] duration metric: took 4m11.574379s to StartCluster
	I0328 12:10:41.935898   18107 settings.go:142] acquiring lock: {Name:mkfc1d043149af7cff65561e827dba55cefba229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:10:41.935986   18107 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:10:41.936410   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
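The settings and lock lines above show kubeconfig updates guarded by a retrying lock with Delay:500ms and Timeout:1m0s. A generic lockfile sketch with those parameters, not minikube's actual lock package:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive lockfile until the timeout elapses.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: only one writer gets the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	// ... write the kubeconfig while holding the lock ...
}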
	I0328 12:10:41.936611   18107 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:10:41.940550   18107 out.go:177] * Verifying Kubernetes components...
	I0328 12:10:41.936623   18107 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 12:10:41.936692   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:10:41.948780   18107 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-732000"
	I0328 12:10:41.948794   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:10:41.948800   18107 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-732000"
	W0328 12:10:41.948803   18107 addons.go:243] addon storage-provisioner should already be in state true
	I0328 12:10:41.948797   18107 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-732000"
	I0328 12:10:41.948831   18107 host.go:66] Checking if "stopped-upgrade-732000" exists ...
	I0328 12:10:41.948833   18107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-732000"
	I0328 12:10:41.952756   18107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:10:41.263849   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:41.955793   18107 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:10:41.955799   18107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 12:10:41.955805   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:10:41.956901   18107 kapi.go:59] client config for stopped-upgrade-732000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043d2d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:10:41.957019   18107 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-732000"
	W0328 12:10:41.957026   18107 addons.go:243] addon default-storageclass should already be in state true
	I0328 12:10:41.957036   18107 host.go:66] Checking if "stopped-upgrade-732000" exists ...
	I0328 12:10:41.958039   18107 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 12:10:41.958054   18107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 12:10:41.958066   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
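Each "new ssh client" line above dials the VM's forwarded SSH port on localhost with the machine's private key. A self-contained sketch with golang.org/x/crypto/ssh (an assumed dependency; minikube's sshutil wrapper is not shown in this log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The test VM's host key is not pinned in this sketch.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "localhost:53341", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same command the runner issues right after connecting, below.
	out, _ := sess.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("%s", out)
}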
	I0328 12:10:42.025108   18107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:10:42.030317   18107 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:10:42.030356   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:10:42.034239   18107 api_server.go:72] duration metric: took 97.616625ms to wait for apiserver process to appear ...
	I0328 12:10:42.034247   18107 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:10:42.034254   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
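The readiness wait above has two stages: first confirm a kube-apiserver process exists at all (the pgrep run), then start polling healthz. A sketch of the first stage; the retry cadence is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process appears
// or the deadline passes, matching the "waiting for apiserver process" step.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check as the log: newest exact full-match kube-apiserver process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed cadence
	}
	return fmt.Errorf("kube-apiserver process never appeared")
}

func main() {
	// Matches the 6m0s node wait declared above.
	if err := waitForAPIServerProcess(6 * time.Minute); err != nil {
		fmt.Println(err)
	}
}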
	I0328 12:10:42.065128   18107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:10:42.079949   18107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
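Both addon manifests are applied the same way: the YAML is copied onto the node, then the pinned kubectl binary runs against the in-VM kubeconfig. A sketch of that apply step; the paths come from the log, the helper is illustrative:

package main

import (
	"os"
	"os/exec"
)

// applyAddon runs the pinned kubectl exactly as the Run lines above do,
// passing KUBECONFIG through sudo's environment-assignment syntax.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		// Failures surface later, e.g. the storageclass i/o timeout below.
		_ = applyAddon(m)
	}
}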
	I0328 12:10:46.266177   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:46.266450   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:46.293982   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:46.294092   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:46.310881   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:46.310969   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:46.323293   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:46.323368   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:46.334486   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:46.334551   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:46.345259   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:46.345329   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:46.356024   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:46.356101   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:46.369004   17919 logs.go:276] 0 containers: []
	W0328 12:10:46.369016   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:46.369074   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:46.379982   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:46.379998   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:46.380003   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:46.392609   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:46.392622   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:46.414704   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:46.414716   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:46.425606   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:46.425619   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:46.439309   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:46.439320   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:46.463365   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:46.463373   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:46.467744   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:46.467750   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:46.480119   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:46.480132   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:46.492124   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:46.492135   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:46.526581   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:46.526592   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:46.541320   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:46.541332   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:46.552833   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:46.552843   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:46.568228   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:46.568240   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:46.580444   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:46.580457   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:46.599037   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:46.599048   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:49.135880   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:47.036378   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:47.036404   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:54.138457   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:54.138548   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:54.149576   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:10:54.149641   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:54.159973   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:10:54.160032   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:54.171249   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:10:54.171330   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:54.184169   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:10:54.184238   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:54.196268   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:10:54.196339   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:54.206589   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:10:54.206652   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:54.218270   17919 logs.go:276] 0 containers: []
	W0328 12:10:54.218283   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:54.218347   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:54.229261   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:10:54.229283   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:10:54.229288   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:10:54.243747   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:10:54.243757   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:10:54.258051   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:10:54.258062   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:10:54.269918   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:10:54.269928   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:10:54.287310   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:54.287319   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:54.323390   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:54.323402   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:54.359005   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:10:54.359019   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:10:54.374527   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:54.374537   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:54.397759   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:10:54.397767   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:54.409739   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:10:54.409750   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:10:54.421519   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:10:54.421531   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:10:54.433073   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:10:54.433083   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:10:54.445061   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:10:54.445072   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:10:54.456480   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:54.456490   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:54.461317   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:10:54.461324   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:10:52.036698   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:52.036738   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:56.980881   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:57.037075   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:57.037114   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:01.983218   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:01.983467   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:02.004017   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:02.004124   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:02.021431   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:02.021510   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:02.033699   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:02.033772   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:02.045260   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:02.045333   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:02.056310   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:02.056381   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:02.067029   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:02.067094   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:02.077818   17919 logs.go:276] 0 containers: []
	W0328 12:11:02.077828   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:02.077887   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:02.107430   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:02.107451   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:02.107456   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:02.123578   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:02.123595   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:02.158274   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:02.158294   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:02.193279   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:02.193289   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:02.206018   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:02.206030   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:02.218133   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:02.218144   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:02.229798   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:02.229808   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:02.244353   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:02.244364   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:02.255854   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:02.255865   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:02.267149   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:02.267160   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:02.281774   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:02.281783   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:02.299370   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:02.299381   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:02.311173   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:02.311182   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:02.315695   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:02.315703   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:02.333434   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:02.333444   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:02.037509   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:02.037524   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:04.859703   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:07.038010   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:07.038034   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:12.038674   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:12.038716   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0328 12:11:12.446155   18107 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0328 12:11:12.450568   18107 out.go:177] * Enabled addons: storage-provisioner
	I0328 12:11:09.862056   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:09.862263   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:09.880729   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:09.880810   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:09.897267   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:09.897349   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:09.908845   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:09.908912   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:09.919831   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:09.919896   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:09.930434   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:09.930508   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:09.940801   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:09.940873   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:09.951069   17919 logs.go:276] 0 containers: []
	W0328 12:11:09.951080   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:09.951133   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:09.961399   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:09.961416   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:09.961421   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:09.975862   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:09.975871   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:09.993890   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:09.993900   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:10.019184   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:10.019197   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:10.030939   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:10.030952   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:10.066208   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:10.066220   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:10.071351   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:10.071358   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:10.083139   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:10.083150   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:10.095071   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:10.095085   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:10.110442   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:10.110455   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:10.122410   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:10.122420   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:10.134191   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:10.134202   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:10.146158   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:10.146168   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:10.180113   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:10.180127   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:10.194520   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:10.194533   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:12.708104   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:12.459521   18107 addons.go:505] duration metric: took 30.522538917s for enable addons: enabled=[storage-provisioner]
	I0328 12:11:17.710481   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:17.710666   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:17.732269   17919 logs.go:276] 1 containers: [67239a430e57]
	I0328 12:11:17.732366   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:17.747767   17919 logs.go:276] 1 containers: [50335decc273]
	I0328 12:11:17.747841   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:17.760862   17919 logs.go:276] 4 containers: [29d16be6a40d d932cd05f970 9277f2572ab3 4bd185c8dcf8]
	I0328 12:11:17.760939   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:17.771652   17919 logs.go:276] 1 containers: [8124ae123a84]
	I0328 12:11:17.771718   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:17.781721   17919 logs.go:276] 1 containers: [2ef56f733809]
	I0328 12:11:17.781793   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:17.792641   17919 logs.go:276] 1 containers: [480dbd1df7aa]
	I0328 12:11:17.792711   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:17.802751   17919 logs.go:276] 0 containers: []
	W0328 12:11:17.802761   17919 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:17.802812   17919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:17.813370   17919 logs.go:276] 1 containers: [bd9e4606aec2]
	I0328 12:11:17.813387   17919 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:17.813391   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:17.837159   17919 logs.go:123] Gathering logs for container status ...
	I0328 12:11:17.837168   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:17.849657   17919 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:17.849670   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:17.886859   17919 logs.go:123] Gathering logs for coredns [d932cd05f970] ...
	I0328 12:11:17.886880   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d932cd05f970"
	I0328 12:11:17.900651   17919 logs.go:123] Gathering logs for kube-proxy [2ef56f733809] ...
	I0328 12:11:17.900676   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef56f733809"
	I0328 12:11:17.913768   17919 logs.go:123] Gathering logs for kube-apiserver [67239a430e57] ...
	I0328 12:11:17.913783   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67239a430e57"
	I0328 12:11:17.933268   17919 logs.go:123] Gathering logs for coredns [29d16be6a40d] ...
	I0328 12:11:17.933279   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29d16be6a40d"
	I0328 12:11:17.944887   17919 logs.go:123] Gathering logs for kube-controller-manager [480dbd1df7aa] ...
	I0328 12:11:17.944898   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480dbd1df7aa"
	I0328 12:11:17.962974   17919 logs.go:123] Gathering logs for kube-scheduler [8124ae123a84] ...
	I0328 12:11:17.962984   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8124ae123a84"
	I0328 12:11:17.977543   17919 logs.go:123] Gathering logs for storage-provisioner [bd9e4606aec2] ...
	I0328 12:11:17.977551   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9e4606aec2"
	I0328 12:11:17.989255   17919 logs.go:123] Gathering logs for etcd [50335decc273] ...
	I0328 12:11:17.989267   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50335decc273"
	I0328 12:11:18.003202   17919 logs.go:123] Gathering logs for coredns [9277f2572ab3] ...
	I0328 12:11:18.003212   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9277f2572ab3"
	I0328 12:11:18.021352   17919 logs.go:123] Gathering logs for coredns [4bd185c8dcf8] ...
	I0328 12:11:18.021365   17919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bd185c8dcf8"
	I0328 12:11:18.033200   17919 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:18.033211   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:18.037758   17919 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:18.037766   17919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:17.039955   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:17.039999   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:20.574840   17919 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:25.577353   17919 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:25.580701   17919 out.go:177] 
	W0328 12:11:25.584518   17919 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0328 12:11:25.584528   17919 out.go:239] * 
	W0328 12:11:25.585263   17919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:11:25.594579   17919 out.go:177] 
	I0328 12:11:22.041216   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:22.041240   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:27.042593   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:27.042615   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:32.044250   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:32.044289   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:37.046619   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:37.046680   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
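	Both minikube processes in this trace (pids 17919 and 18107) spend the whole window polling the apiserver healthz endpoint and re-gathering container logs between attempts; every probe times out, and for pid 17919 the 6m node wait has already expired with GUEST_START above. A minimal stand-in for the probe, assuming the guest NAT address shown in the log, would be:

	    # hypothetical reproduction of minikube's healthz check (api_server.go:253);
	    # -k skips TLS verification, --max-time mirrors the client timeout seen above
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz

	On this run the request never returns "ok"; it stalls until the client timeout fires.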
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-03-28 19:02:26 UTC, ends at Thu 2024-03-28 19:11:41 UTC. --
	Mar 28 19:11:26 running-upgrade-623000 dockerd[3244]: time="2024-03-28T19:11:26.971139537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 19:11:26 running-upgrade-623000 dockerd[3244]: time="2024-03-28T19:11:26.971166204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 19:11:26 running-upgrade-623000 dockerd[3244]: time="2024-03-28T19:11:26.971255247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 19:11:26 running-upgrade-623000 dockerd[3244]: time="2024-03-28T19:11:26.971348789Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8278de306ae837921ee52760151c7177f2f538c61c9629c8dfefc95b75c26cda pid=18415 runtime=io.containerd.runc.v2
	Mar 28 19:11:27 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:27Z" level=error msg="ContainerStats resp: {0x400092dd00 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x400088b240 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x400088b380 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x4000812a00 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x400088bd80 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x4000516040 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x4000516140 linux}"
	Mar 28 19:11:28 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:28Z" level=error msg="ContainerStats resp: {0x4000812300 linux}"
	Mar 28 19:11:29 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 28 19:11:34 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 28 19:11:38 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:38Z" level=error msg="ContainerStats resp: {0x400074b600 linux}"
	Mar 28 19:11:38 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:38Z" level=error msg="ContainerStats resp: {0x4000a0e240 linux}"
	Mar 28 19:11:39 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:39Z" level=error msg="ContainerStats resp: {0x4000a0ec80 linux}"
	Mar 28 19:11:39 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400035a7c0 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x4000a0ff40 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400035b5c0 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400092cb00 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400092ccc0 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400055a800 linux}"
	Mar 28 19:11:40 running-upgrade-623000 cri-dockerd[3085]: time="2024-03-28T19:11:40Z" level=error msg="ContainerStats resp: {0x400092d740 linux}"
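	The repeated level=error "ContainerStats resp" lines from cri-dockerd above are known log-level noise (they carry successful stats payloads), not failures. When reading this journal, a hypothetical filter such as:

	    sudo journalctl -u cri-docker -n 400 | grep -v "ContainerStats resp"

	leaves only the CNI and shim messages.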
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	865a26caf5035       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   ed373e361d678
	8278de306ae83       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   18db08fe76929
	29d16be6a40da       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   18db08fe76929
	d932cd05f9702       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ed373e361d678
	2ef56f7338095       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   d29e224a88ffe
	bd9e4606aec21       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f4e56503f1293
	67239a430e57e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   37e1b2cd50d35
	50335decc273b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   bf2114d36b4dc
	8124ae123a844       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   008eb369cb7d8
	480dbd1df7aa3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   dd2e45423be72
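	The status table shows the two coredns containers on their second restart (ATTEMPT 2, with the previous attempts Exited) while the static control-plane containers have run uninterrupted for about 4 minutes, which matches the DNS timeouts in the coredns sections below. Assuming crictl is present in the guest, the same view comes from:

	    sudo crictl ps -a --name coredns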
	
	
	==> coredns [29d16be6a40d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:35579->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:38384->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:58727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:51080->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:33715->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:50073->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:48442->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1681018176703512941.5136663934741206343. HINFO: read udp 10.244.0.2:33071->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8278de306ae8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8900527335275525524.5552371225004585793. HINFO: read udp 10.244.0.2:47103->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8900527335275525524.5552371225004585793. HINFO: read udp 10.244.0.2:49691->10.0.2.3:53: i/o timeout
	
	
	==> coredns [865a26caf503] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2317226483296227969.7977776978394913157. HINFO: read udp 10.244.0.3:52289->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2317226483296227969.7977776978394913157. HINFO: read udp 10.244.0.3:55482->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d932cd05f970] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:39011->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:59002->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:49981->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:42516->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:36932->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:59240->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:51093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:55888->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:41823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332560815004162185.1245783154903460860. HINFO: read udp 10.244.0.3:36616->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
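	All four coredns instances fail the same way: their startup HINFO self-probe to the upstream resolver 10.0.2.3 (the QEMU user-mode-networking DNS) times out from the pod network (10.244.0.x). That points at outbound DNS from the guest rather than at coredns itself. A hypothetical in-guest check, assuming dig is available:

	    # query the slirp DNS directly over the same path the pods use
	    dig +time=2 +tries=1 @10.0.2.3 kubernetes.io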
	
	
	==> describe nodes <==
	Name:               running-upgrade-623000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-623000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967
	                    minikube.k8s.io/name=running-upgrade-623000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T12_07_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 19:07:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-623000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 19:11:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 19:07:24 +0000   Thu, 28 Mar 2024 19:07:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 19:07:24 +0000   Thu, 28 Mar 2024 19:07:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 19:07:24 +0000   Thu, 28 Mar 2024 19:07:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 19:07:24 +0000   Thu, 28 Mar 2024 19:07:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-623000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd98ce99395f4218bdf0212aa8f44183
	  System UUID:                cd98ce99395f4218bdf0212aa8f44183
	  Boot ID:                    74c71521-5ca6-46ee-bed7-38b033006054
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-gdlf6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-q4z2x                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-623000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-623000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-623000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-sq5t8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-623000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-623000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-623000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-623000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-623000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-623000 event: Registered Node running-upgrade-623000 in Controller
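	Despite the host-side healthz failures, the node reports Ready and the kubelet lease is still being renewed (RenewTime 19:11:39), so the guest-side control plane is serving. This describe output itself was produced in-guest via the same pattern minikube uses above; a hypothetical equivalent check is:

	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl get nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig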
	
	
	==> dmesg <==
	[  +1.868932] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.069579] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.065413] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.139131] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.069995] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.065426] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +3.360551] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +14.170308] systemd-fstab-generator[1961]: Ignoring "noauto" for root device
	[  +2.632934] systemd-fstab-generator[2241]: Ignoring "noauto" for root device
	[Mar28 19:03] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.081873] systemd-fstab-generator[2286]: Ignoring "noauto" for root device
	[  +0.090989] systemd-fstab-generator[2299]: Ignoring "noauto" for root device
	[  +2.502464] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.192465] systemd-fstab-generator[3040]: Ignoring "noauto" for root device
	[  +0.060243] systemd-fstab-generator[3053]: Ignoring "noauto" for root device
	[  +0.088250] systemd-fstab-generator[3064]: Ignoring "noauto" for root device
	[  +0.089737] systemd-fstab-generator[3078]: Ignoring "noauto" for root device
	[  +2.755646] systemd-fstab-generator[3231]: Ignoring "noauto" for root device
	[  +5.496842] systemd-fstab-generator[3640]: Ignoring "noauto" for root device
	[  +1.185176] systemd-fstab-generator[3905]: Ignoring "noauto" for root device
	[ +18.373614] kauditd_printk_skb: 68 callbacks suppressed
	[Mar28 19:04] kauditd_printk_skb: 21 callbacks suppressed
	[Mar28 19:07] systemd-fstab-generator[11643]: Ignoring "noauto" for root device
	[  +5.631924] systemd-fstab-generator[12232]: Ignoring "noauto" for root device
	[  +0.469224] systemd-fstab-generator[12379]: Ignoring "noauto" for root device
	
	
	==> etcd [50335decc273] <==
	{"level":"info","ts":"2024-03-28T19:07:20.430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-28T19:07:20.434Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-28T19:07:20.452Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T19:07:20.452Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-28T19:07:20.452Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-28T19:07:20.452Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T19:07:20.452Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-623000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T19:07:20.486Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T19:07:20.487Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T19:07:20.487Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T19:07:20.495Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-28T19:07:20.495Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T19:07:20.495Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T19:07:20.487Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T19:07:20.502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T19:07:20.502Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T19:07:20.502Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
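	The etcd log shows a clean single-member bootstrap (leader elected at term 2, cluster version 3.5) with no storage or peer errors. A hypothetical in-guest health check against the same endpoints and certificate paths etcd advertises above:

	    sudo ETCDCTL_API=3 etcdctl \
	      --endpoints=https://10.0.2.15:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health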
	
	
	==> kernel <==
	 19:11:42 up 9 min,  0 users,  load average: 0.16, 0.41, 0.25
	Linux running-upgrade-623000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [67239a430e57] <==
	I0328 19:07:22.071473       1 controller.go:611] quota admission added evaluator for: namespaces
	I0328 19:07:22.090368       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0328 19:07:22.118579       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 19:07:22.118633       1 cache.go:39] Caches are synced for autoregister controller
	I0328 19:07:22.118661       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0328 19:07:22.119508       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0328 19:07:22.120361       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 19:07:22.861392       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0328 19:07:23.026457       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0328 19:07:23.030503       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0328 19:07:23.031102       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 19:07:23.165928       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 19:07:23.178743       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 19:07:23.277092       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0328 19:07:23.280369       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0328 19:07:23.280759       1 controller.go:611] quota admission added evaluator for: endpoints
	I0328 19:07:23.282108       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 19:07:24.158728       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0328 19:07:24.623340       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0328 19:07:24.626498       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0328 19:07:24.633471       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0328 19:07:24.675235       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 19:07:37.927629       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0328 19:07:38.328189       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0328 19:07:39.993929       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
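	The apiserver's own log ends with routine admission registrations at 19:07:39 and records no panics or listener errors afterwards, so the healthz timeouts look like a host-to-guest path problem rather than an apiserver crash. A hypothetical comparison of the two paths:

	    # in-guest, over the NAT address the trace probes
	    minikube -p running-upgrade-623000 ssh -- curl -sk https://10.0.2.15:8443/healthz
	    # from the host, the same probe the trace above keeps making
	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz

	If the first returns "ok" while the second stalls, the QEMU user-network forwarding is the failing hop.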
	
	
	==> kube-controller-manager [480dbd1df7aa] <==
	I0328 19:07:37.826926       1 shared_informer.go:262] Caches are synced for TTL
	I0328 19:07:37.827282       1 shared_informer.go:262] Caches are synced for service account
	I0328 19:07:37.827831       1 shared_informer.go:262] Caches are synced for ephemeral
	I0328 19:07:37.862643       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0328 19:07:37.872472       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 19:07:37.872740       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 19:07:37.872754       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 19:07:37.872763       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 19:07:37.873835       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0328 19:07:37.876524       1 shared_informer.go:262] Caches are synced for endpoint
	I0328 19:07:37.909109       1 shared_informer.go:262] Caches are synced for persistent volume
	I0328 19:07:37.910173       1 shared_informer.go:262] Caches are synced for attach detach
	I0328 19:07:37.926855       1 shared_informer.go:262] Caches are synced for expand
	I0328 19:07:37.929852       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sq5t8"
	I0328 19:07:37.976356       1 shared_informer.go:262] Caches are synced for PV protection
	I0328 19:07:37.981367       1 shared_informer.go:262] Caches are synced for resource quota
	I0328 19:07:38.026116       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0328 19:07:38.027191       1 shared_informer.go:262] Caches are synced for crt configmap
	I0328 19:07:38.029294       1 shared_informer.go:262] Caches are synced for resource quota
	I0328 19:07:38.329672       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0328 19:07:38.453996       1 shared_informer.go:262] Caches are synced for garbage collector
	I0328 19:07:38.475739       1 shared_informer.go:262] Caches are synced for garbage collector
	I0328 19:07:38.475780       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0328 19:07:38.829163       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gdlf6"
	I0328 19:07:38.834477       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-q4z2x"
	
	
	==> kube-proxy [2ef56f733809] <==
	I0328 19:07:39.964838       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0328 19:07:39.964867       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0328 19:07:39.964884       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0328 19:07:39.991849       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0328 19:07:39.991859       1 server_others.go:206] "Using iptables Proxier"
	I0328 19:07:39.991899       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0328 19:07:39.992128       1 server.go:661] "Version info" version="v1.24.1"
	I0328 19:07:39.992137       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 19:07:39.992753       1 config.go:317] "Starting service config controller"
	I0328 19:07:39.992759       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0328 19:07:39.992768       1 config.go:226] "Starting endpoint slice config controller"
	I0328 19:07:39.992769       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0328 19:07:39.992973       1 config.go:444] "Starting node config controller"
	I0328 19:07:39.992975       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0328 19:07:40.093279       1 shared_informer.go:262] Caches are synced for service config
	I0328 19:07:40.093279       1 shared_informer.go:262] Caches are synced for node config
	I0328 19:07:40.093298       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8124ae123a84] <==
	W0328 19:07:22.070695       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 19:07:22.071544       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 19:07:22.070708       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 19:07:22.071671       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 19:07:22.070827       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 19:07:22.071766       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 19:07:22.071786       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 19:07:22.071842       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 19:07:22.070881       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 19:07:22.071872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 19:07:22.072089       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 19:07:22.072163       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 19:07:22.070857       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 19:07:22.072198       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 19:07:22.912647       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 19:07:22.912713       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 19:07:22.925914       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 19:07:22.925993       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 19:07:22.954136       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 19:07:22.954189       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 19:07:22.990989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 19:07:22.991028       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 19:07:23.042172       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 19:07:23.042345       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 19:07:26.066436       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
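	The scheduler's "forbidden" list/watch warnings are the usual bootstrap race: its informers start before the RBAC bindings exist, and the section ends with the caches syncing at 19:07:26. Once the cluster is reachable, the permissions can be confirmed with a standard check such as:

	    kubectl auth can-i list nodes --as system:kube-scheduler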
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-03-28 19:02:26 UTC, ends at Thu 2024-03-28 19:11:42 UTC. --
	Mar 28 19:07:37 running-upgrade-623000 kubelet[12255]: E0328 19:07:37.894377   12255 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/16e92f6d-4a04-4141-a40e-c456cde71208-kube-api-access-rtmg2 podName:16e92f6d-4a04-4141-a40e-c456cde71208 nodeName:}" failed. No retries permitted until 2024-03-28 19:07:38.394363564 +0000 UTC m=+13.783834895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rtmg2" (UniqueName: "kubernetes.io/projected/16e92f6d-4a04-4141-a40e-c456cde71208-kube-api-access-rtmg2") pod "storage-provisioner" (UID: "16e92f6d-4a04-4141-a40e-c456cde71208") : configmap "kube-root-ca.crt" not found
	Mar 28 19:07:37 running-upgrade-623000 kubelet[12255]: I0328 19:07:37.932649   12255 topology_manager.go:200] "Topology Admit Handler"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.093171   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f135ccd7-3d8b-432f-96eb-a3add00ed424-xtables-lock\") pod \"kube-proxy-sq5t8\" (UID: \"f135ccd7-3d8b-432f-96eb-a3add00ed424\") " pod="kube-system/kube-proxy-sq5t8"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.093263   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f135ccd7-3d8b-432f-96eb-a3add00ed424-lib-modules\") pod \"kube-proxy-sq5t8\" (UID: \"f135ccd7-3d8b-432f-96eb-a3add00ed424\") " pod="kube-system/kube-proxy-sq5t8"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.093297   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-proxy\") pod \"kube-proxy-sq5t8\" (UID: \"f135ccd7-3d8b-432f-96eb-a3add00ed424\") " pod="kube-system/kube-proxy-sq5t8"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.093313   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swt7k\" (UniqueName: \"kubernetes.io/projected/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-api-access-swt7k\") pod \"kube-proxy-sq5t8\" (UID: \"f135ccd7-3d8b-432f-96eb-a3add00ed424\") " pod="kube-system/kube-proxy-sq5t8"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.196294   12255 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.196310   12255 projected.go:192] Error preparing data for projected volume kube-api-access-swt7k for pod kube-system/kube-proxy-sq5t8: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.196406   12255 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-api-access-swt7k podName:f135ccd7-3d8b-432f-96eb-a3add00ed424 nodeName:}" failed. No retries permitted until 2024-03-28 19:07:38.69632665 +0000 UTC m=+14.085797981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-swt7k" (UniqueName: "kubernetes.io/projected/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-api-access-swt7k") pod "kube-proxy-sq5t8" (UID: "f135ccd7-3d8b-432f-96eb-a3add00ed424") : configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.394704   12255 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.394725   12255 projected.go:192] Error preparing data for projected volume kube-api-access-rtmg2 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.394752   12255 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/16e92f6d-4a04-4141-a40e-c456cde71208-kube-api-access-rtmg2 podName:16e92f6d-4a04-4141-a40e-c456cde71208 nodeName:}" failed. No retries permitted until 2024-03-28 19:07:39.394743146 +0000 UTC m=+14.784214435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rtmg2" (UniqueName: "kubernetes.io/projected/16e92f6d-4a04-4141-a40e-c456cde71208-kube-api-access-rtmg2") pod "storage-provisioner" (UID: "16e92f6d-4a04-4141-a40e-c456cde71208") : configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.697779   12255 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.697798   12255 projected.go:192] Error preparing data for projected volume kube-api-access-swt7k for pod kube-system/kube-proxy-sq5t8: configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: E0328 19:07:38.697825   12255 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-api-access-swt7k podName:f135ccd7-3d8b-432f-96eb-a3add00ed424 nodeName:}" failed. No retries permitted until 2024-03-28 19:07:39.697815239 +0000 UTC m=+15.087286570 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-swt7k" (UniqueName: "kubernetes.io/projected/f135ccd7-3d8b-432f-96eb-a3add00ed424-kube-api-access-swt7k") pod "kube-proxy-sq5t8" (UID: "f135ccd7-3d8b-432f-96eb-a3add00ed424") : configmap "kube-root-ca.crt" not found
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.831638   12255 topology_manager.go:200] "Topology Admit Handler"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.835440   12255 topology_manager.go:200] "Topology Admit Handler"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.999628   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0c95e5-a49b-425d-a36b-7025abd744f1-config-volume\") pod \"coredns-6d4b75cb6d-gdlf6\" (UID: \"5e0c95e5-a49b-425d-a36b-7025abd744f1\") " pod="kube-system/coredns-6d4b75cb6d-gdlf6"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.999751   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zsv\" (UniqueName: \"kubernetes.io/projected/5e0c95e5-a49b-425d-a36b-7025abd744f1-kube-api-access-h4zsv\") pod \"coredns-6d4b75cb6d-gdlf6\" (UID: \"5e0c95e5-a49b-425d-a36b-7025abd744f1\") " pod="kube-system/coredns-6d4b75cb6d-gdlf6"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.999777   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmpxf\" (UniqueName: \"kubernetes.io/projected/2bf6e120-7e17-4c73-96e1-83329145f6f9-kube-api-access-dmpxf\") pod \"coredns-6d4b75cb6d-q4z2x\" (UID: \"2bf6e120-7e17-4c73-96e1-83329145f6f9\") " pod="kube-system/coredns-6d4b75cb6d-q4z2x"
	Mar 28 19:07:38 running-upgrade-623000 kubelet[12255]: I0328 19:07:38.999793   12255 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf6e120-7e17-4c73-96e1-83329145f6f9-config-volume\") pod \"coredns-6d4b75cb6d-q4z2x\" (UID: \"2bf6e120-7e17-4c73-96e1-83329145f6f9\") " pod="kube-system/coredns-6d4b75cb6d-q4z2x"
	Mar 28 19:07:39 running-upgrade-623000 kubelet[12255]: I0328 19:07:39.936098   12255 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="18db08fe769296e5afa332cb5e4bc13ded7434534b1bfe827bbafed4748cb712"
	Mar 28 19:07:39 running-upgrade-623000 kubelet[12255]: I0328 19:07:39.940762   12255 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d29e224a88ffed642da945ef98c3997dae90f9776b4819f2247e0f95ac98575f"
	Mar 28 19:11:27 running-upgrade-623000 kubelet[12255]: I0328 19:11:27.114876   12255 scope.go:110] "RemoveContainer" containerID="9277f2572ab3d16dddfb1236527850bad0c9b828845b8e549b5ba5feacff35f6"
	Mar 28 19:11:27 running-upgrade-623000 kubelet[12255]: I0328 19:11:27.126327   12255 scope.go:110] "RemoveContainer" containerID="4bd185c8dcf81b3df0b7c261a129ef5524ecf4a1cb0bf31e2392298210655181"
	
	
	==> storage-provisioner [bd9e4606aec2] <==
	I0328 19:07:39.745242       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 19:07:39.751175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 19:07:39.751193       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 19:07:39.759236       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 19:07:39.759379       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68bf9c42-a788-4503-a01a-9756362273a3", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-623000_8a84deec-219b-414b-a2e8-d6ec474c6e95 became leader
	I0328 19:07:39.759393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-623000_8a84deec-219b-414b-a2e8-d6ec474c6e95!
	I0328 19:07:39.860214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-623000_8a84deec-219b-414b-a2e8-d6ec474c6e95!
	

-- /stdout --
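
The kubelet errors in the log above are a benign startup race: projected service-account volumes cannot be mounted until kube-controller-manager has published the kube-root-ca.crt ConfigMap, so the kubelet retries the mount with a doubling delay (durationBeforeRetry 500ms, then 1s). A minimal Go sketch of that doubling-retry pattern, with a hypothetical mount helper standing in for the kubelet's actual nestedpendingoperations code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// mount stands in for the kubelet's MountVolume.SetUp step; here it
	// fails until the third attempt, as if the ConfigMap appeared late.
	func mount(attempt int) error {
		if attempt < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	}

	func main() {
		delay := 500 * time.Millisecond // initial durationBeforeRetry in the log
		for attempt := 0; ; attempt++ {
			if err := mount(attempt); err == nil {
				fmt.Println("volume mounted")
				return
			}
			fmt.Printf("mount failed, no retries permitted for %v\n", delay)
			time.Sleep(delay)
			delay *= 2 // 500ms -> 1s -> 2s, mirroring the log's backoff
		}
	}
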
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-623000 -n running-upgrade-623000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-623000 -n running-upgrade-623000: exit status 2 (15.744770458s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-623000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-623000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-623000: (2.167074584s)
--- FAIL: TestRunningBinaryUpgrade (626.12s)
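
The --format={{.APIServer}} flag used in the status checks above is evaluated by minikube as a Go text/template over its status struct, which is why the captured stdout is the bare word "Stopped". A minimal sketch of the same mechanism, with a hypothetical two-field Status struct standing in for minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for minikube's status struct.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// The CLI flag --format={{.APIServer}} is parsed and applied much like this.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		st := Status{Host: "Stopped", APIServer: "Stopped"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// Output: Stopped
	}
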

TestKubernetesUpgrade (18.77s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.096851792s)

-- stdout --
	* [kubernetes-upgrade-850000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-850000" primary control-plane node in "kubernetes-upgrade-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:04:36.790578   18014 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:04:36.790708   18014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:04:36.790712   18014 out.go:304] Setting ErrFile to fd 2...
	I0328 12:04:36.790714   18014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:04:36.790848   18014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:04:36.792060   18014 out.go:298] Setting JSON to false
	I0328 12:04:36.808865   18014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11048,"bootTime":1711641628,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:04:36.808928   18014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:04:36.815757   18014 out.go:177] * [kubernetes-upgrade-850000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:04:36.823586   18014 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:04:36.828731   18014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:04:36.823657   18014 notify.go:220] Checking for updates...
	I0328 12:04:36.835661   18014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:04:36.839752   18014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:04:36.842752   18014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:04:36.845739   18014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:04:36.849080   18014 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:04:36.849146   18014 config.go:182] Loaded profile config "running-upgrade-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:04:36.849192   18014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:04:36.853524   18014 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:04:36.860711   18014 start.go:297] selected driver: qemu2
	I0328 12:04:36.860717   18014 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:04:36.860725   18014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:04:36.863031   18014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:04:36.867546   18014 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:04:36.871779   18014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 12:04:36.871812   18014 cni.go:84] Creating CNI manager for ""
	I0328 12:04:36.871819   18014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0328 12:04:36.871852   18014 start.go:340] cluster config:
	{Name:kubernetes-upgrade-850000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:04:36.876335   18014 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:04:36.888764   18014 out.go:177] * Starting "kubernetes-upgrade-850000" primary control-plane node in "kubernetes-upgrade-850000" cluster
	I0328 12:04:36.892731   18014 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 12:04:36.892746   18014 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 12:04:36.892756   18014 cache.go:56] Caching tarball of preloaded images
	I0328 12:04:36.892815   18014 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:04:36.892821   18014 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0328 12:04:36.892885   18014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kubernetes-upgrade-850000/config.json ...
	I0328 12:04:36.892898   18014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kubernetes-upgrade-850000/config.json: {Name:mk60102cf88046f701286160cbd846294ecb0e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:04:36.893241   18014 start.go:360] acquireMachinesLock for kubernetes-upgrade-850000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:04:36.893277   18014 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "kubernetes-upgrade-850000"
	I0328 12:04:36.893291   18014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:04:36.893316   18014 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:04:36.897725   18014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:04:36.923737   18014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-850000" (driver="qemu2")
	I0328 12:04:36.923775   18014 client.go:168] LocalClient.Create starting
	I0328 12:04:36.923841   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:04:36.923871   18014 main.go:141] libmachine: Decoding PEM data...
	I0328 12:04:36.923879   18014 main.go:141] libmachine: Parsing certificate...
	I0328 12:04:36.923925   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:04:36.923946   18014 main.go:141] libmachine: Decoding PEM data...
	I0328 12:04:36.923953   18014 main.go:141] libmachine: Parsing certificate...
	I0328 12:04:36.924300   18014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:04:37.083164   18014 main.go:141] libmachine: Creating SSH key...
	I0328 12:04:37.135709   18014 main.go:141] libmachine: Creating Disk image...
	I0328 12:04:37.135715   18014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:04:37.135889   18014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:37.156588   18014 main.go:141] libmachine: STDOUT: 
	I0328 12:04:37.156609   18014 main.go:141] libmachine: STDERR: 
	I0328 12:04:37.156676   18014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2 +20000M
	I0328 12:04:37.168166   18014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:04:37.168180   18014 main.go:141] libmachine: STDERR: 
	I0328 12:04:37.168202   18014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:37.168209   18014 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:04:37.168245   18014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:34:41:85:56:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:37.170041   18014 main.go:141] libmachine: STDOUT: 
	I0328 12:04:37.170057   18014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:04:37.170078   18014 client.go:171] duration metric: took 246.295084ms to LocalClient.Create
	I0328 12:04:39.172185   18014 start.go:128] duration metric: took 2.278836916s to createHost
	I0328 12:04:39.172202   18014 start.go:83] releasing machines lock for "kubernetes-upgrade-850000", held for 2.278893125s
	W0328 12:04:39.172219   18014 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:04:39.185990   18014 out.go:177] * Deleting "kubernetes-upgrade-850000" in qemu2 ...
	W0328 12:04:39.196312   18014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:04:39.196321   18014 start.go:728] Will try again in 5 seconds ...
	I0328 12:04:44.198569   18014 start.go:360] acquireMachinesLock for kubernetes-upgrade-850000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:04:44.199105   18014 start.go:364] duration metric: took 439.375µs to acquireMachinesLock for "kubernetes-upgrade-850000"
	I0328 12:04:44.199260   18014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:04:44.199538   18014 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:04:44.210241   18014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:04:44.260760   18014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-850000" (driver="qemu2")
	I0328 12:04:44.260839   18014 client.go:168] LocalClient.Create starting
	I0328 12:04:44.260998   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:04:44.261067   18014 main.go:141] libmachine: Decoding PEM data...
	I0328 12:04:44.261086   18014 main.go:141] libmachine: Parsing certificate...
	I0328 12:04:44.261150   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:04:44.261191   18014 main.go:141] libmachine: Decoding PEM data...
	I0328 12:04:44.261204   18014 main.go:141] libmachine: Parsing certificate...
	I0328 12:04:44.261784   18014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:04:44.422193   18014 main.go:141] libmachine: Creating SSH key...
	I0328 12:04:44.794274   18014 main.go:141] libmachine: Creating Disk image...
	I0328 12:04:44.794288   18014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:04:44.794515   18014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:44.807432   18014 main.go:141] libmachine: STDOUT: 
	I0328 12:04:44.807457   18014 main.go:141] libmachine: STDERR: 
	I0328 12:04:44.807537   18014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2 +20000M
	I0328 12:04:44.818861   18014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:04:44.818891   18014 main.go:141] libmachine: STDERR: 
	I0328 12:04:44.818904   18014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:44.818908   18014 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:04:44.818944   18014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:5e:c6:27:e2:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:44.820807   18014 main.go:141] libmachine: STDOUT: 
	I0328 12:04:44.820824   18014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:04:44.820837   18014 client.go:171] duration metric: took 559.9805ms to LocalClient.Create
	I0328 12:04:46.823004   18014 start.go:128] duration metric: took 2.623408209s to createHost
	I0328 12:04:46.823037   18014 start.go:83] releasing machines lock for "kubernetes-upgrade-850000", held for 2.623877875s
	W0328 12:04:46.823279   18014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:04:46.833522   18014 out.go:177] 
	W0328 12:04:46.836579   18014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:04:46.836596   18014 out.go:239] * 
	* 
	W0328 12:04:46.837595   18014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:04:46.848553   18014 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
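
Both create attempts above die at the same point, as do the other qemu2 starts in this report: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so the VM never receives its network fd and never boots. A small diagnostic sketch (not part of the test suite) that reproduces the same connection check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// minikube's qemu2 driver hands the VM its network fd over this
		// unix socket; "connection refused" means no daemon is listening.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
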
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-850000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-850000: (3.261647542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-850000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-850000 status --format={{.Host}}: exit status 7 (60.072625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175933208s)

-- stdout --
	* [kubernetes-upgrade-850000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-850000" primary control-plane node in "kubernetes-upgrade-850000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:04:50.211848   18051 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:04:50.211988   18051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:04:50.211992   18051 out.go:304] Setting ErrFile to fd 2...
	I0328 12:04:50.211994   18051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:04:50.212127   18051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:04:50.213093   18051 out.go:298] Setting JSON to false
	I0328 12:04:50.229125   18051 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11062,"bootTime":1711641628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:04:50.229186   18051 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:04:50.233909   18051 out.go:177] * [kubernetes-upgrade-850000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:04:50.241136   18051 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:04:50.241206   18051 notify.go:220] Checking for updates...
	I0328 12:04:50.245026   18051 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:04:50.249024   18051 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:04:50.252079   18051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:04:50.255985   18051 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:04:50.259091   18051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:04:50.262394   18051 config.go:182] Loaded profile config "kubernetes-upgrade-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0328 12:04:50.262674   18051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:04:50.266974   18051 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:04:50.274100   18051 start.go:297] selected driver: qemu2
	I0328 12:04:50.274105   18051 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:04:50.274159   18051 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:04:50.276402   18051 cni.go:84] Creating CNI manager for ""
	I0328 12:04:50.276421   18051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:04:50.276443   18051 start.go:340] cluster config:
	{Name:kubernetes-upgrade-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:04:50.280585   18051 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:04:50.285107   18051 out.go:177] * Starting "kubernetes-upgrade-850000" primary control-plane node in "kubernetes-upgrade-850000" cluster
	I0328 12:04:50.292042   18051 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 12:04:50.292058   18051 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0328 12:04:50.292069   18051 cache.go:56] Caching tarball of preloaded images
	I0328 12:04:50.292124   18051 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:04:50.292132   18051 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0328 12:04:50.292186   18051 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kubernetes-upgrade-850000/config.json ...
	I0328 12:04:50.292666   18051 start.go:360] acquireMachinesLock for kubernetes-upgrade-850000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:04:50.292692   18051 start.go:364] duration metric: took 19.459µs to acquireMachinesLock for "kubernetes-upgrade-850000"
	I0328 12:04:50.292700   18051 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:04:50.292705   18051 fix.go:54] fixHost starting: 
	I0328 12:04:50.292816   18051 fix.go:112] recreateIfNeeded on kubernetes-upgrade-850000: state=Stopped err=<nil>
	W0328 12:04:50.292823   18051 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:04:50.296130   18051 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-850000" ...
	I0328 12:04:50.303032   18051 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:5e:c6:27:e2:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:50.304913   18051 main.go:141] libmachine: STDOUT: 
	I0328 12:04:50.304939   18051 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:04:50.304966   18051 fix.go:56] duration metric: took 12.261208ms for fixHost
	I0328 12:04:50.304970   18051 start.go:83] releasing machines lock for "kubernetes-upgrade-850000", held for 12.27475ms
	W0328 12:04:50.304977   18051 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:04:50.305014   18051 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:04:50.305018   18051 start.go:728] Will try again in 5 seconds ...
	I0328 12:04:55.305744   18051 start.go:360] acquireMachinesLock for kubernetes-upgrade-850000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:04:55.305943   18051 start.go:364] duration metric: took 159.875µs to acquireMachinesLock for "kubernetes-upgrade-850000"
	I0328 12:04:55.306005   18051 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:04:55.306015   18051 fix.go:54] fixHost starting: 
	I0328 12:04:55.306253   18051 fix.go:112] recreateIfNeeded on kubernetes-upgrade-850000: state=Stopped err=<nil>
	W0328 12:04:55.306262   18051 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:04:55.315469   18051 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-850000" ...
	I0328 12:04:55.319617   18051 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:5e:c6:27:e2:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubernetes-upgrade-850000/disk.qcow2
	I0328 12:04:55.323425   18051 main.go:141] libmachine: STDOUT: 
	I0328 12:04:55.323463   18051 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:04:55.323490   18051 fix.go:56] duration metric: took 17.476625ms for fixHost
	I0328 12:04:55.323496   18051 start.go:83] releasing machines lock for "kubernetes-upgrade-850000", held for 17.541958ms
	W0328 12:04:55.323547   18051 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:04:55.331494   18051 out.go:177] 
	W0328 12:04:55.335357   18051 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:04:55.335365   18051 out.go:239] * 
	* 
	W0328 12:04:55.336096   18051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:04:55.347479   18051 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-850000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-850000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-850000 version --output=json: exit status 1 (39.92025ms)

** stderr ** 
	error: context "kubernetes-upgrade-850000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
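
kubectl fails here because the aborted starts never wrote a "kubernetes-upgrade-850000" entry into the kubeconfig. A sketch of how to list the contexts kubectl actually sees, using client-go's clientcmd loader (the path is the KUBECONFIG from this run; error handling simplified):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/17877-15366/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		for name := range cfg.Contexts {
			// "kubernetes-upgrade-850000" is absent, which is exactly the
			// "context does not exist" error kubectl reported above.
			fmt.Println("context:", name)
		}
	}
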
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-28 12:04:55.397909 -0700 PDT m=+1020.155931793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-850000 -n kubernetes-upgrade-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-850000 -n kubernetes-upgrade-850000: exit status 7 (32.001834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-850000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-850000
--- FAIL: TestKubernetesUpgrade (18.77s)
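
For reference, the steps logged above trace the shape of this test: start minikube on the oldest supported Kubernetes, stop it, restart on the newest version, then verify with kubectl. A condensed sketch of that flow using os/exec; the real assertions live in version_upgrade_test.go:

	package main

	import (
		"log"
		"os/exec"
	)

	// run mirrors the test's (dbg) Run steps: execute, log combined output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		log.Printf("%s %v:\n%s", name, args, out)
		return err
	}

	func main() {
		const profile = "kubernetes-upgrade-850000" // profile name from the log
		bin := "out/minikube-darwin-arm64"

		// 1. Start on the oldest supported Kubernetes (fails above with exit 80).
		if err := run(bin, "start", "-p", profile, "--memory=2200",
			"--kubernetes-version=v1.20.0", "--driver=qemu2"); err != nil {
			log.Print("start on v1.20.0 failed: ", err)
		}
		// 2. Stop the cluster, then 3. restart on the newest version under test.
		_ = run(bin, "stop", "-p", profile)
		if err := run(bin, "start", "-p", profile, "--memory=2200",
			"--kubernetes-version=v1.30.0-beta.0", "--driver=qemu2"); err != nil {
			log.Print("upgrade start failed: ", err)
		}
		// 4. Verify the upgraded cluster answers kubectl.
		_ = run("kubectl", "--context", profile, "version", "--output=json")
	}
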

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.45s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=17877
- KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1856288729/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.45s)
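
DRV_UNSUPPORTED_OS is the expected outcome on this agent: hyperkit is an Intel-only macOS hypervisor and cannot run on darwin/arm64, so this subtest and the v1.2.0 variant below can only pass on amd64 workers. A minimal sketch of the kind of platform gate involved (not minikube's exact code):

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		// hyperkit requires macOS on amd64; Apple-silicon agents must skip it.
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			fmt.Printf("driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Println("hyperkit is a candidate driver")
	}
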

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=17877
- KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1424583537/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.71s)
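
Note: both TestHyperkitDriverSkipUpgrade variants fail identically. The exit status 56 here accompanies minikube's DRV_UNSUPPORTED_OS message: hyperkit is an Intel-only hypervisor, so requesting it on a darwin/arm64 host can never succeed, and these subtests should be skipped on this runner rather than counted as failures. A sketch of the kind of pre-flight guard that would do that (hypothetical; the suite's real logic lives in driver_install_or_update_test.go):

	# hypothetical pre-flight guard: hyperkit only ships for Intel macOS,
	# so skip instead of failing on Apple Silicon hosts
	if [ "$(uname -sm)" != "Darwin x86_64" ]; then
		echo "SKIP: hyperkit requires darwin/amd64, host is $(uname -sm)"
		exit 0
	fi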

TestStoppedBinaryUpgrade/Upgrade (582.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2999011245 start -p stopped-upgrade-732000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2999011245 start -p stopped-upgrade-732000 --memory=2200 --vm-driver=qemu2 : (47.915972667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2999011245 -p stopped-upgrade-732000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2999011245 -p stopped-upgrade-732000 stop: (12.119633083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-732000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-732000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.474176s)

-- stdout --
	* [stopped-upgrade-732000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-732000" primary control-plane node in "stopped-upgrade-732000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-732000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0328 12:06:00.665317   18107 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:06:00.665464   18107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:06:00.665468   18107 out.go:304] Setting ErrFile to fd 2...
	I0328 12:06:00.665470   18107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:06:00.665641   18107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:06:00.666798   18107 out.go:298] Setting JSON to false
	I0328 12:06:00.686567   18107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11132,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:06:00.686639   18107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:06:00.691235   18107 out.go:177] * [stopped-upgrade-732000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:06:00.698280   18107 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:06:00.698303   18107 notify.go:220] Checking for updates...
	I0328 12:06:00.705192   18107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:06:00.708255   18107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:06:00.712256   18107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:06:00.715214   18107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:06:00.718279   18107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:06:00.721575   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:06:00.725208   18107 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 12:06:00.728217   18107 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:06:00.732252   18107 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:06:00.739279   18107 start.go:297] selected driver: qemu2
	I0328 12:06:00.739286   18107 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:00.739351   18107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:06:00.742286   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:06:00.742304   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:06:00.742337   18107 start.go:340] cluster config:
	{Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:00.742392   18107 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:06:00.754186   18107 out.go:177] * Starting "stopped-upgrade-732000" primary control-plane node in "stopped-upgrade-732000" cluster
	I0328 12:06:00.758249   18107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:06:00.758266   18107 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0328 12:06:00.758278   18107 cache.go:56] Caching tarball of preloaded images
	I0328 12:06:00.758333   18107 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:06:00.758340   18107 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0328 12:06:00.758394   18107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/config.json ...
	I0328 12:06:00.758940   18107 start.go:360] acquireMachinesLock for stopped-upgrade-732000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:06:00.758976   18107 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "stopped-upgrade-732000"
	I0328 12:06:00.758986   18107 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:06:00.758992   18107 fix.go:54] fixHost starting: 
	I0328 12:06:00.759127   18107 fix.go:112] recreateIfNeeded on stopped-upgrade-732000: state=Stopped err=<nil>
	W0328 12:06:00.759136   18107 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:06:00.766255   18107 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-732000" ...
	I0328 12:06:00.770235   18107 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53341-:22,hostfwd=tcp::53342-:2376,hostname=stopped-upgrade-732000 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/disk.qcow2
	I0328 12:06:00.820209   18107 main.go:141] libmachine: STDOUT: 
	I0328 12:06:00.820246   18107 main.go:141] libmachine: STDERR: 
	I0328 12:06:00.820252   18107 main.go:141] libmachine: Waiting for VM to start (ssh -p 53341 docker@127.0.0.1)...
	I0328 12:06:20.243958   18107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/config.json ...
	I0328 12:06:20.244401   18107 machine.go:94] provisionDockerMachine start ...
	I0328 12:06:20.244496   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.244752   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.244761   18107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 12:06:20.311434   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 12:06:20.311454   18107 buildroot.go:166] provisioning hostname "stopped-upgrade-732000"
	I0328 12:06:20.311536   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.311699   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.311710   18107 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-732000 && echo "stopped-upgrade-732000" | sudo tee /etc/hostname
	I0328 12:06:20.373379   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-732000
	
	I0328 12:06:20.373435   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.373550   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.373560   18107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-732000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-732000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-732000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 12:06:20.427356   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 12:06:20.427369   18107 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17877-15366/.minikube CaCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17877-15366/.minikube}
	I0328 12:06:20.427377   18107 buildroot.go:174] setting up certificates
	I0328 12:06:20.427381   18107 provision.go:84] configureAuth start
	I0328 12:06:20.427386   18107 provision.go:143] copyHostCerts
	I0328 12:06:20.427457   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem, removing ...
	I0328 12:06:20.427465   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem
	I0328 12:06:20.427571   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.pem (1078 bytes)
	I0328 12:06:20.427760   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem, removing ...
	I0328 12:06:20.427764   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem
	I0328 12:06:20.427814   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/cert.pem (1123 bytes)
	I0328 12:06:20.427922   18107 exec_runner.go:144] found /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem, removing ...
	I0328 12:06:20.427925   18107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem
	I0328 12:06:20.427970   18107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17877-15366/.minikube/key.pem (1675 bytes)
	I0328 12:06:20.428063   18107 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-732000 san=[127.0.0.1 localhost minikube stopped-upgrade-732000]
	I0328 12:06:20.524868   18107 provision.go:177] copyRemoteCerts
	I0328 12:06:20.524912   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 12:06:20.524922   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:20.553933   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 12:06:20.560654   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 12:06:20.567202   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 12:06:20.574363   18107 provision.go:87] duration metric: took 146.970958ms to configureAuth
	I0328 12:06:20.574372   18107 buildroot.go:189] setting minikube options for container-runtime
	I0328 12:06:20.574482   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:06:20.574514   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.574598   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.574602   18107 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 12:06:20.624666   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 12:06:20.624674   18107 buildroot.go:70] root file system type: tmpfs
	I0328 12:06:20.624723   18107 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 12:06:20.624772   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.624873   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.624905   18107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 12:06:20.680061   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 12:06:20.680110   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:20.680233   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:20.680241   18107 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 12:06:21.020265   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 12:06:21.020277   18107 machine.go:97] duration metric: took 775.858583ms to provisionDockerMachine
	I0328 12:06:21.020285   18107 start.go:293] postStartSetup for "stopped-upgrade-732000" (driver="qemu2")
	I0328 12:06:21.020297   18107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 12:06:21.020365   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 12:06:21.020374   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:21.050154   18107 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 12:06:21.051380   18107 info.go:137] Remote host: Buildroot 2021.02.12
	I0328 12:06:21.051387   18107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/addons for local assets ...
	I0328 12:06:21.051457   18107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17877-15366/.minikube/files for local assets ...
	I0328 12:06:21.051575   18107 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem -> 157842.pem in /etc/ssl/certs
	I0328 12:06:21.051714   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 12:06:21.054900   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:06:21.062004   18107 start.go:296] duration metric: took 41.708ms for postStartSetup
	I0328 12:06:21.062023   18107 fix.go:56] duration metric: took 20.302793542s for fixHost
	I0328 12:06:21.062060   18107 main.go:141] libmachine: Using SSH client type: native
	I0328 12:06:21.062161   18107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030ddbf0] 0x1030e0450 <nil>  [] 0s} localhost 53341 <nil> <nil>}
	I0328 12:06:21.062166   18107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 12:06:21.111753   18107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711652781.220273837
	
	I0328 12:06:21.111759   18107 fix.go:216] guest clock: 1711652781.220273837
	I0328 12:06:21.111763   18107 fix.go:229] Guest: 2024-03-28 12:06:21.220273837 -0700 PDT Remote: 2024-03-28 12:06:21.062025 -0700 PDT m=+20.430843168 (delta=158.248837ms)
	I0328 12:06:21.111773   18107 fix.go:200] guest clock delta is within tolerance: 158.248837ms
	I0328 12:06:21.111776   18107 start.go:83] releasing machines lock for "stopped-upgrade-732000", held for 20.352556s
	I0328 12:06:21.111833   18107 ssh_runner.go:195] Run: cat /version.json
	I0328 12:06:21.111842   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:06:21.111836   18107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 12:06:21.111864   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	W0328 12:06:21.112410   18107 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53341: connect: connection refused
	I0328 12:06:21.112430   18107 retry.go:31] will retry after 286.462755ms: dial tcp [::1]:53341: connect: connection refused
	W0328 12:06:21.437591   18107 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0328 12:06:21.437734   18107 ssh_runner.go:195] Run: systemctl --version
	I0328 12:06:21.441155   18107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 12:06:21.443959   18107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 12:06:21.443999   18107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0328 12:06:21.448608   18107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0328 12:06:21.455709   18107 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 12:06:21.455722   18107 start.go:494] detecting cgroup driver to use...
	I0328 12:06:21.455849   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:06:21.465027   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0328 12:06:21.468525   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 12:06:21.471691   18107 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 12:06:21.471725   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 12:06:21.475126   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:06:21.478509   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 12:06:21.481954   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 12:06:21.484916   18107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 12:06:21.487680   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 12:06:21.491131   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 12:06:21.494575   18107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 12:06:21.497669   18107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 12:06:21.500190   18107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 12:06:21.502969   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:21.568101   18107 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 12:06:21.576363   18107 start.go:494] detecting cgroup driver to use...
	I0328 12:06:21.576449   18107 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 12:06:21.584491   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:06:21.589994   18107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 12:06:21.596025   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 12:06:21.600518   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 12:06:21.604924   18107 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 12:06:21.658661   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 12:06:21.664407   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 12:06:21.670230   18107 ssh_runner.go:195] Run: which cri-dockerd
	I0328 12:06:21.671614   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 12:06:21.674665   18107 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 12:06:21.679815   18107 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 12:06:21.743500   18107 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 12:06:21.810740   18107 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 12:06:21.810804   18107 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 12:06:21.816552   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:21.881766   18107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:06:23.036141   18107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154345708s)
	I0328 12:06:23.036220   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 12:06:23.041066   18107 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0328 12:06:23.048119   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:06:23.053133   18107 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 12:06:23.113534   18107 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 12:06:23.181372   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:23.243269   18107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 12:06:23.249606   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 12:06:23.253837   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:23.312385   18107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 12:06:23.354097   18107 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 12:06:23.354176   18107 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 12:06:23.356326   18107 start.go:562] Will wait 60s for crictl version
	I0328 12:06:23.356383   18107 ssh_runner.go:195] Run: which crictl
	I0328 12:06:23.357962   18107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 12:06:23.373315   18107 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0328 12:06:23.373400   18107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:06:23.390832   18107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 12:06:23.409967   18107 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0328 12:06:23.410095   18107 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0328 12:06:23.411568   18107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 12:06:23.415438   18107 kubeadm.go:877] updating cluster {Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0328 12:06:23.415479   18107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0328 12:06:23.415518   18107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:06:23.425951   18107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:06:23.425960   18107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:06:23.426008   18107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:06:23.428974   18107 ssh_runner.go:195] Run: which lz4
	I0328 12:06:23.430218   18107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 12:06:23.431455   18107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 12:06:23.431465   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0328 12:06:24.112783   18107 docker.go:649] duration metric: took 682.588667ms to copy over tarball
	I0328 12:06:24.112843   18107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 12:06:25.284921   18107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.172049416s)
	I0328 12:06:25.284935   18107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 12:06:25.300671   18107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 12:06:25.303581   18107 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0328 12:06:25.309070   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:25.393766   18107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 12:06:27.003436   18107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.609614417s)
	I0328 12:06:27.003577   18107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 12:06:27.018806   18107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 12:06:27.018816   18107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0328 12:06:27.018821   18107 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 12:06:27.024630   18107 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0328 12:06:27.024672   18107 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:27.024733   18107 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:27.024770   18107 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:27.024811   18107 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:27.024811   18107 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:27.024868   18107 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:27.024913   18107 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:27.034610   18107 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:27.034814   18107 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:27.034921   18107 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:27.034944   18107 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:27.034988   18107 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0328 12:06:27.035041   18107 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:27.035168   18107 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:27.035471   18107 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0328 12:06:29.157477   18107 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0328 12:06:29.158054   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.191283   18107 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0328 12:06:29.191335   18107 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.191436   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0328 12:06:29.210898   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0328 12:06:29.211068   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:06:29.213570   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0328 12:06:29.213591   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0328 12:06:29.252302   18107 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0328 12:06:29.252322   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0328 12:06:29.284223   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.299671   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0328 12:06:29.299708   18107 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0328 12:06:29.299727   18107 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.299777   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0328 12:06:29.310344   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0328 12:06:29.318583   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.320683   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.321345   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.329442   18107 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0328 12:06:29.329469   18107 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.329517   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0328 12:06:29.335185   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.337599   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0328 12:06:29.343359   18107 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0328 12:06:29.343383   18107 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.343426   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0328 12:06:29.343460   18107 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0328 12:06:29.343470   18107 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.343498   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0328 12:06:29.349106   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0328 12:06:29.377318   18107 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0328 12:06:29.377339   18107 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.377347   18107 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0328 12:06:29.377358   18107 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0328 12:06:29.377390   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0328 12:06:29.377391   18107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0328 12:06:29.377396   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0328 12:06:29.377411   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0328 12:06:29.391958   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0328 12:06:29.391959   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0328 12:06:29.392067   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0328 12:06:29.393596   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0328 12:06:29.393608   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0328 12:06:29.401354   18107 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0328 12:06:29.401362   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0328 12:06:29.429855   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0328 12:06:29.653702   18107 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0328 12:06:29.653875   18107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.672066   18107 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0328 12:06:29.672107   18107 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.672185   18107 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:06:29.689373   18107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 12:06:29.689503   18107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:06:29.691144   18107 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0328 12:06:29.691166   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0328 12:06:29.719560   18107 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 12:06:29.719572   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0328 12:06:29.965500   18107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 12:06:29.965532   18107 cache_images.go:92] duration metric: took 2.94666975s to LoadCachedImages
	W0328 12:06:29.965572   18107 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0328 12:06:29.965578   18107 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0328 12:06:29.965637   18107 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-732000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 12:06:29.965708   18107 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 12:06:29.979119   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:06:29.979131   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:06:29.979136   18107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 12:06:29.979144   18107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-732000 NodeName:stopped-upgrade-732000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 12:06:29.979212   18107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-732000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
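The kubeadm config above is rendered from per-cluster parameters before being copied to /var/tmp/minikube/kubeadm.yaml.new. A rough sketch of that kind of templating with Go's text/template (the struct and field names here are illustrative stand-ins, not minikube's actual bootstrapper types):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// ClusterParams is a hypothetical stand-in for the values seen in this run.
	type ClusterParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initConfig))
		// Render with the values this run used.
		if err := t.Execute(os.Stdout, ClusterParams{
			AdvertiseAddress: "10.0.2.15",
			BindPort:         8443,
			NodeName:         "stopped-upgrade-732000",
		}); err != nil {
			log.Fatal(err)
		}
	}
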
	
	I0328 12:06:29.979271   18107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0328 12:06:29.982994   18107 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 12:06:29.983031   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 12:06:29.986304   18107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0328 12:06:29.991600   18107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 12:06:29.997215   18107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0328 12:06:30.003057   18107 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0328 12:06:30.004604   18107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
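The one-liner above updates /etc/hosts idempotently: grep -v drops any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is written to a temp file before being copied back with sudo. An equivalent sketch in Go (stopping at the temp file, since the privileged copy is environment-specific):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "10.0.2.15\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}

		// Keep every line except an existing control-plane mapping.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)

		// Write the new file; a real tool would copy it over /etc/hosts
		// with elevated privileges, as the `sudo cp` in the log does.
		if err := os.WriteFile("/tmp/hosts.new",
			[]byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}
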
	I0328 12:06:30.008348   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:06:30.073387   18107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:06:30.078536   18107 certs.go:68] Setting up /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000 for IP: 10.0.2.15
	I0328 12:06:30.078543   18107 certs.go:194] generating shared ca certs ...
	I0328 12:06:30.078551   18107 certs.go:226] acquiring lock for ca certs: {Name:mk77bea021df8758c6a5a63d76349b59be8fba89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.078739   18107 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key
	I0328 12:06:30.079067   18107 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key
	I0328 12:06:30.079076   18107 certs.go:256] generating profile certs ...
	I0328 12:06:30.079300   18107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key
	I0328 12:06:30.079316   18107 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c
	I0328 12:06:30.079326   18107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0328 12:06:30.232719   18107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c ...
	I0328 12:06:30.232735   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c: {Name:mk30d932ae259d9e0dca92c2d8cac201b1e35a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.233001   18107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c ...
	I0328 12:06:30.233008   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c: {Name:mk31e2e238e4451bd2cfc5bb7888ea8123fd1cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.233153   18107 certs.go:381] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt.dc73869c -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt
	I0328 12:06:30.233281   18107 certs.go:385] copying /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key.dc73869c -> /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key
	I0328 12:06:30.233615   18107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.key
	I0328 12:06:30.233799   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem (1338 bytes)
	W0328 12:06:30.233977   18107 certs.go:480] ignoring /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784_empty.pem, impossibly tiny 0 bytes
	I0328 12:06:30.233983   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 12:06:30.234001   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem (1078 bytes)
	I0328 12:06:30.234018   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem (1123 bytes)
	I0328 12:06:30.234039   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/key.pem (1675 bytes)
	I0328 12:06:30.234075   18107 certs.go:484] found cert: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem (1708 bytes)
	I0328 12:06:30.234382   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 12:06:30.241224   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 12:06:30.248116   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 12:06:30.255054   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 12:06:30.261817   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 12:06:30.268221   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 12:06:30.275160   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 12:06:30.282717   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 12:06:30.289917   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 12:06:30.296245   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/15784.pem --> /usr/share/ca-certificates/15784.pem (1338 bytes)
	I0328 12:06:30.303108   18107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/ssl/certs/157842.pem --> /usr/share/ca-certificates/157842.pem (1708 bytes)
	I0328 12:06:30.310566   18107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 12:06:30.315890   18107 ssh_runner.go:195] Run: openssl version
	I0328 12:06:30.317809   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/157842.pem && ln -fs /usr/share/ca-certificates/157842.pem /etc/ssl/certs/157842.pem"
	I0328 12:06:30.320686   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.322011   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 18:49 /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.322033   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/157842.pem
	I0328 12:06:30.323829   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/157842.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 12:06:30.327112   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 12:06:30.330289   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.331691   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 19:02 /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.331708   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 12:06:30.333438   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 12:06:30.336127   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15784.pem && ln -fs /usr/share/ca-certificates/15784.pem /etc/ssl/certs/15784.pem"
	I0328 12:06:30.339239   18107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.340660   18107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 18:49 /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.340682   18107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15784.pem
	I0328 12:06:30.342294   18107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15784.pem /etc/ssl/certs/51391683.0"
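Each CA certificate placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem above); that hashed-name convention is how OpenSSL finds trust anchors during -CApath verification. A sketch of computing the hash and creating the link, shelling out to openssl just as the log does (paths are illustrative):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// `openssl x509 -hash -noout` prints the subject-name hash that
		// OpenSSL uses to look certificates up in a hashed directory.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		// Link the cert as <hash>.0 so openssl's -CApath lookup finds it.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
			log.Fatal(err)
		}
	}
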
	I0328 12:06:30.345399   18107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 12:06:30.346684   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 12:06:30.349278   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 12:06:30.351361   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 12:06:30.353319   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 12:06:30.355093   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 12:06:30.356867   18107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
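The `openssl x509 -checkend 86400` invocations above exit non-zero if a certificate expires within the next 24 hours, which decides whether the existing control-plane certs can be reused on restart. The same check written directly against crypto/x509 (a sketch of the equivalent logic, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of `-checkend 86400`: fail if the cert is within
		// 24 hours of its NotAfter time.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}
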
	I0328 12:06:30.358548   18107 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53376 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 12:06:30.358616   18107 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:06:30.368242   18107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 12:06:30.371492   18107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 12:06:30.371499   18107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 12:06:30.371502   18107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 12:06:30.371541   18107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 12:06:30.374565   18107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 12:06:30.374962   18107 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-732000" does not appear in /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:06:30.375064   18107 kubeconfig.go:62] /Users/jenkins/minikube-integration/17877-15366/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-732000" cluster setting kubeconfig missing "stopped-upgrade-732000" context setting]
	I0328 12:06:30.375263   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:06:30.375691   18107 kapi.go:59] client config for stopped-upgrade-732000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043d2d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:06:30.376119   18107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 12:06:30.378851   18107 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-732000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0328 12:06:30.378855   18107 kubeadm.go:1154] stopping kube-system containers ...
	I0328 12:06:30.378890   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 12:06:30.389823   18107 docker.go:483] Stopping containers: [b4451a54079a 8610e5a378ef a4c23e1c3563 25f63db07e9f cde1338e3262 e22ff461ac53 63f4fd83f105 c91dd579012c]
	I0328 12:06:30.389885   18107 ssh_runner.go:195] Run: docker stop b4451a54079a 8610e5a378ef a4c23e1c3563 25f63db07e9f cde1338e3262 e22ff461ac53 63f4fd83f105 c91dd579012c
	I0328 12:06:30.400156   18107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 12:06:30.406051   18107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:06:30.408836   18107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:06:30.408841   18107 kubeadm.go:156] found existing configuration files:
	
	I0328 12:06:30.408863   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf
	I0328 12:06:30.411432   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:06:30.411454   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:06:30.414514   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf
	I0328 12:06:30.417207   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:06:30.417230   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:06:30.419651   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf
	I0328 12:06:30.422997   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:06:30.423021   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:06:30.426123   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf
	I0328 12:06:30.428548   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:06:30.428573   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 12:06:30.431304   18107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:06:30.434435   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:30.459771   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:31.482598   18107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.022798958s)
	I0328 12:06:31.482612   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:31.599992   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 12:06:31.625104   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
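Rather than a full `kubeadm init`, the restart path replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, and a local etcd, all against the same rendered config. A compressed sketch of that sequence (error handling trimmed; in the log each phase actually runs under sudo with the versioned binaries directory prepended to PATH):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			// Run each phase in order, stopping at the first failure.
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				log.Fatalf("phase %v failed: %v\n%s", p, err, out)
			}
		}
	}
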
	I0328 12:06:31.648475   18107 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:06:31.648548   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.150195   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.650641   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:06:32.655461   18107 api_server.go:72] duration metric: took 1.006974833s to wait for apiserver process to appear ...
	I0328 12:06:32.655471   18107 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:06:32.655479   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:37.657658   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:37.657675   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:42.657995   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:42.658070   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:47.658761   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:47.658807   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:52.659474   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:52.659614   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:06:57.660833   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:06:57.660898   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:02.662188   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:02.662241   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:07.663695   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:07.663807   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:12.666448   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:12.666493   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:17.668794   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:17.668815   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:22.671133   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:22.671218   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:27.672611   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:27.672672   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:32.676587   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
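Each healthz probe above gives the apiserver five seconds before timing out, then retries; after enough consecutive failures the tool falls back to gathering component logs, as the docker ps runs below show. A bare-bones version of that probe loop (skipping TLS verification for brevity; the real client verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the 5s gaps between probes in the log
			Transport: &http.Transport{
				// Sketch only: minikube pins the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver is healthy")
				return
			}
			if err != nil {
				fmt.Println("healthz check failed:", err)
			} else {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver healthz")
	}
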
	I0328 12:07:32.676977   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:32.707005   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:32.707133   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:32.724954   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:32.725050   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:32.738370   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:32.738439   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:32.749947   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:32.750015   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:32.760483   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:32.760560   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:32.771157   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:32.771223   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:32.785872   18107 logs.go:276] 0 containers: []
	W0328 12:07:32.785888   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:32.785950   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:32.796459   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:32.796484   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:32.796489   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:32.807026   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:32.807036   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:32.832096   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:32.832103   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:32.938604   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:32.938617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:32.952769   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:32.952778   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:32.968025   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:32.968035   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:32.990137   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:32.990147   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:33.007317   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:33.007326   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:33.019122   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:33.019134   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:33.057154   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:33.057161   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:33.074717   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:33.074730   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:33.086573   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:33.086584   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:33.101358   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:33.101370   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:33.114813   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:33.114824   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:33.119080   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:33.119093   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:33.130700   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:33.130709   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
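While the apiserver stays unreachable, each diagnostic round enumerates the control-plane containers by a k8s_<component> name filter and pulls the last 400 log lines from each, exactly as the stanza above does (and as the following rounds repeat). The gathering pattern, reduced to a sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containersFor lists container IDs whose names match a k8s_<component> filter.
	func containersFor(component string) []string {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		return strings.Fields(string(out))
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			for _, id := range containersFor(component) {
				fmt.Printf("==> logs for %s [%s]\n", component, id)
				// Pull the last 400 lines, as the gatherer above does.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("%s\n", logs)
			}
		}
	}
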
	I0328 12:07:35.658264   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:40.658575   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:40.658791   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:40.671888   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:40.671973   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:40.682953   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:40.683027   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:40.695053   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:40.695128   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:40.705413   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:40.705479   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:40.716124   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:40.716202   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:40.726876   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:40.726949   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:40.737059   18107 logs.go:276] 0 containers: []
	W0328 12:07:40.737068   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:40.737131   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:40.747554   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:40.747569   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:40.747574   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:40.766308   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:40.766321   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:40.777627   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:40.777637   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:40.818914   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:40.818925   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:40.830770   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:40.830783   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:40.847289   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:40.847299   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:40.871295   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:40.871305   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:40.890329   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:40.890340   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:40.904371   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:40.904382   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:40.917228   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:40.917240   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:40.936168   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:40.936188   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:40.978096   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:40.978110   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:40.982230   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:40.982237   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:40.996080   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:40.996090   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:41.010202   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:41.010215   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:41.024684   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:41.024697   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:43.552925   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:48.555757   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:48.556127   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:48.601840   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:48.601994   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:48.624628   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:48.624716   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:48.637401   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:48.637475   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:48.649032   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:48.649099   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:48.659627   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:48.659695   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:48.670134   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:48.670195   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:48.680882   18107 logs.go:276] 0 containers: []
	W0328 12:07:48.680894   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:48.680955   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:48.691326   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:48.691342   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:48.691348   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:48.702637   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:48.702647   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:48.713680   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:48.713692   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:48.728729   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:48.728740   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:48.733205   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:48.733214   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:48.756509   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:48.756520   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:48.794097   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:48.794118   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:48.819672   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:48.819683   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:48.834302   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:48.834316   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:48.845762   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:48.845773   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:48.864329   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:48.864341   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:48.882229   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:48.882238   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:48.895686   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:48.895699   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:48.910009   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:48.910019   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:48.926699   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:48.926711   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:48.938364   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:48.938374   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:51.476772   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:07:56.479122   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:07:56.479247   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:07:56.493458   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:07:56.493532   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:07:56.504402   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:07:56.504466   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:07:56.515210   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:07:56.515288   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:07:56.525429   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:07:56.525502   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:07:56.535633   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:07:56.535719   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:07:56.554090   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:07:56.554156   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:07:56.567303   18107 logs.go:276] 0 containers: []
	W0328 12:07:56.567313   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:07:56.567371   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:07:56.577974   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:07:56.577990   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:07:56.577996   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:07:56.595037   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:07:56.595050   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:07:56.606782   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:07:56.606792   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:07:56.642602   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:07:56.642609   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:07:56.646514   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:07:56.646522   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:07:56.660867   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:07:56.660877   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:07:56.672769   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:07:56.672782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:07:56.687418   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:07:56.687432   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:07:56.698575   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:07:56.698587   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:07:56.710235   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:07:56.710247   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:07:56.744752   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:07:56.744766   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:07:56.762793   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:07:56.762803   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:07:56.774331   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:07:56.774341   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:07:56.797424   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:07:56.797430   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:07:56.811772   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:07:56.811782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:07:56.836955   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:07:56.836968   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:07:59.357296   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:04.359769   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:04.360009   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:04.388003   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:04.388124   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:04.405130   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:04.405220   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:04.418536   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:04.418612   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:04.429673   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:04.429743   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:04.439946   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:04.440015   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:04.450395   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:04.450464   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:04.465884   18107 logs.go:276] 0 containers: []
	W0328 12:08:04.465898   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:04.465958   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:04.476448   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:04.476466   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:04.476472   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:04.490547   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:04.490558   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:04.501963   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:04.501973   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:04.519470   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:04.519483   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:04.532464   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:04.532474   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:04.567111   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:04.567123   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:04.590654   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:04.590662   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:04.606493   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:04.606504   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:04.644777   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:04.644788   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:04.649329   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:04.649336   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:04.664558   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:04.664572   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:04.676046   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:04.676060   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:04.691296   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:04.691307   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:04.716033   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:04.716045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:04.730867   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:04.730878   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:04.742883   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:04.742894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:07.256509   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:12.258926   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:12.259043   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:12.270068   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:12.270148   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:12.281417   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:12.281489   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:12.291870   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:12.291933   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:12.302599   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:12.302666   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:12.312578   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:12.312654   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:12.323082   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:12.323152   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:12.333564   18107 logs.go:276] 0 containers: []
	W0328 12:08:12.333575   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:12.333635   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:12.344306   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:12.344323   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:12.344329   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:12.379426   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:12.379440   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:12.394153   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:12.394162   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
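
Editor's note: the "container status" command above packs a double fallback into one line. Unpacked:

    # `which crictl || echo crictl` keeps the literal word "crictl" when the
    # binary is absent, so the sudo invocation fails fast ("command not
    # found") and control falls through to the second alternative.
    sudo `which crictl || echo crictl` ps -a \
      || sudo docker ps -a
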
	I0328 12:08:12.405638   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:12.405652   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:12.429894   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:12.429901   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:12.448758   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:12.448771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:12.466761   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:12.466771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:12.484085   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:12.484097   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:12.495166   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:12.495177   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:12.532877   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:12.532888   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
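
Editor's note on the dmesg invocation, since the flag combination is dense (readings per util-linux dmesg):

    # -P (--nopager)   print directly instead of opening a pager
    # -H (--human)     human-readable timestamps; implies a pager, hence -P
    # -L=never         no color escape codes in the captured output
    # --level ...      only kernel messages of warning severity and above
    # tail -n 400      cap the capture at the newest 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
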
	I0328 12:08:12.537199   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:12.537205   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:12.548333   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:12.548344   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:12.562703   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:12.562714   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:12.584609   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:12.584620   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:12.609787   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:12.609798   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:12.621308   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:12.621317   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
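
Editor's note: a full diagnostic pass like the one just completed is a fan-out of "docker logs --tail 400" over every discovered container ID, plus journalctl for the host services that do not run as containers. Condensed into a hypothetical script using the IDs seen in this report:

    # --tail 400 keeps only the newest lines, so a crash-looping component
    # cannot flood the report.
    for id in 31ac8c0b33dd cde1338e3262 3ae1821f1ee7 a4c23e1c3563 \
              7d5618b87c9e e4a233a9548d b4451a54079a 5f1339913960 \
              241f3f92c6af 25f63db07e9f c27d88e957e1; do
      docker logs --tail 400 "$id"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
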
	I0328 12:08:15.135324   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:20.137641   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:20.137817   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:20.154639   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:20.154719   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:20.167604   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:20.167676   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:20.178204   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:20.178277   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:20.188409   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:20.188474   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:20.199407   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:20.199477   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:20.213685   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:20.213757   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:20.223473   18107 logs.go:276] 0 containers: []
	W0328 12:08:20.223484   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:20.223538   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:20.238302   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:20.238318   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:20.238323   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:20.242346   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:20.242359   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:20.266879   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:20.266890   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:20.278035   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:20.278045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:20.293665   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:20.293677   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:20.308013   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:20.308023   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:20.325827   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:20.325837   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:20.363955   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:20.363964   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
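
Editor's note: "describe nodes" deliberately runs the kubectl binary minikube bundles for the cluster's own Kubernetes version (v1.24.1 here) rather than whatever kubectl the host happens to have, so client and server versions cannot skew:

    # Version-matched kubectl against the in-guest kubeconfig.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
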
	I0328 12:08:20.398561   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:20.398573   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:20.412238   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:20.412253   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:20.426804   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:20.426813   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:20.439689   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:20.439699   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:20.453397   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:20.453413   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:20.468320   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:20.468330   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:20.479684   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:20.479694   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:20.493125   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:20.493134   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:23.018336   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:28.020663   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:28.020855   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:28.045507   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:28.045635   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:28.061144   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:28.061237   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:28.074084   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:28.074157   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:28.085318   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:28.085391   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:28.095690   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:28.095757   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:28.106274   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:28.106343   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:28.116512   18107 logs.go:276] 0 containers: []
	W0328 12:08:28.116527   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:28.116589   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:28.127100   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:28.127117   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:28.127122   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:28.145698   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:28.145710   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:28.167499   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:28.167510   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:28.202631   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:28.202642   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:28.215489   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:28.215500   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:28.238190   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:28.238201   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:28.249368   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:28.249377   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:28.272557   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:28.272565   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:28.284158   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:28.284169   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:28.288840   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:28.288847   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:28.302894   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:28.302905   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:28.327717   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:28.327728   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:28.339134   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:28.339143   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:28.357271   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:28.357281   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:28.370844   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:28.370854   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:28.409145   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:28.409158   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:30.929275   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:35.929976   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:35.930184   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:35.955168   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:35.955275   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:35.972398   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:35.972479   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:35.986653   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:35.986729   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:35.998488   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:35.998559   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:36.009275   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:36.009345   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:36.020683   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:36.020750   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:36.030895   18107 logs.go:276] 0 containers: []
	W0328 12:08:36.030907   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:36.030967   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:36.041175   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:36.041193   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:36.041198   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:36.055096   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:36.055110   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:36.086681   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:36.086692   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:36.104135   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:36.104146   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:36.117454   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:36.117464   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:36.130006   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:36.130017   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:36.164851   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:36.164863   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:36.179573   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:36.179585   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:36.197854   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:36.197864   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:36.212692   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:36.212702   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:36.224287   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:36.224300   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:36.236127   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:36.236137   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:36.247540   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:36.247554   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:36.270584   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:36.270592   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:36.307982   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:36.307989   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:36.312076   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:36.312082   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:38.827505   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:43.830059   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:43.830286   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:43.860047   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:43.860148   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:43.880723   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:43.880797   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:43.894127   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:43.894198   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:43.904114   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:43.904182   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:43.914833   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:43.914907   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:43.925756   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:43.925823   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:43.936434   18107 logs.go:276] 0 containers: []
	W0328 12:08:43.936446   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:43.936497   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:43.946581   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:43.946598   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:43.946604   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:43.957650   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:43.957662   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:43.969833   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:43.969842   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:43.985238   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:43.985249   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:43.999233   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:43.999244   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:44.003471   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:44.003477   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:44.028665   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:44.028676   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:44.042769   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:44.042782   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:44.054669   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:44.054681   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:44.092031   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:44.092041   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:44.106162   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:44.106172   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:44.129887   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:44.129897   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:44.141853   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:44.141864   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:44.176884   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:44.176896   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:44.190996   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:44.191007   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:44.208700   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:44.208712   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:46.727222   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:51.728740   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:51.729117   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:51.766371   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:51.766500   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:51.791082   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:51.791181   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:51.805924   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:51.805995   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:51.818765   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:51.818834   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:51.830750   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:51.830820   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:51.847498   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:51.847564   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:51.857599   18107 logs.go:276] 0 containers: []
	W0328 12:08:51.857610   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:51.857670   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:51.867773   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:51.867790   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:51.867796   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:51.883060   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:51.883072   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:51.902035   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:51.902045   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:51.914042   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:08:51.914057   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:08:51.938709   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:51.938717   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:51.968544   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:51.968554   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:51.980609   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:51.980619   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:51.992541   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:51.992552   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:51.996609   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:51.996617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:52.021734   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:52.021744   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:52.033440   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:52.033450   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:52.050486   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:08:52.050499   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:08:52.086711   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:52.086721   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:52.101206   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:52.101219   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:08:52.114955   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:52.114968   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:52.151899   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:08:52.151910   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:08:54.669190   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:08:59.669864   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:08:59.670079   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:08:59.697542   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:08:59.697665   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:08:59.720990   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:08:59.721070   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:08:59.733957   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:08:59.734017   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:08:59.745447   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:08:59.745518   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:08:59.756150   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:08:59.756224   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:08:59.767167   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:08:59.767235   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:08:59.777380   18107 logs.go:276] 0 containers: []
	W0328 12:08:59.777391   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:08:59.777445   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:08:59.789182   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:08:59.789201   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:08:59.789206   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:08:59.800834   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:08:59.800844   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:08:59.818750   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:08:59.818761   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:08:59.836933   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:08:59.836943   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:08:59.849011   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:08:59.849022   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:08:59.864239   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:08:59.864251   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:08:59.880564   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:08:59.880575   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:08:59.917551   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:08:59.917559   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:08:59.928983   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:08:59.928993   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:08:59.943518   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:08:59.943527   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:08:59.948979   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:08:59.948987   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:08:59.973815   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:08:59.973828   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:08:59.992421   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:08:59.992431   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:00.009676   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:00.009686   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:00.033980   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:00.033990   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:00.070114   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:00.070128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:02.586032   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:07.588397   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:07.588611   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:07.614538   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:07.614628   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:07.628087   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:07.628157   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:07.642627   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:07.642699   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:07.653122   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:07.653195   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:07.664007   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:07.664075   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:07.674314   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:07.674386   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:07.690814   18107 logs.go:276] 0 containers: []
	W0328 12:09:07.690884   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:07.690949   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:07.701662   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:07.701682   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:07.701687   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:07.739834   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:07.739843   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:07.744152   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:07.744160   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:07.758551   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:07.758566   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:07.784125   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:07.784135   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:07.799456   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:07.799466   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:07.814509   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:07.814519   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:07.837871   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:07.837878   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:07.849223   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:07.849234   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:07.883674   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:07.883686   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:07.897523   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:07.897532   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:07.912387   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:07.912397   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:07.929606   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:07.929617   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:07.942464   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:07.942474   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:07.953855   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:07.953866   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:07.968284   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:07.968296   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:10.482813   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:15.484240   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:15.484415   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:15.502376   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:15.502451   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:15.516267   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:15.516343   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:15.528924   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:15.528991   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:15.539465   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:15.539531   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:15.550073   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:15.550137   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:15.560487   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:15.560552   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:15.570583   18107 logs.go:276] 0 containers: []
	W0328 12:09:15.570594   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:15.570645   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:15.581115   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:15.581142   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:15.581150   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:15.585153   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:15.585161   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:15.599076   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:15.599087   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:15.610685   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:15.610697   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:15.628527   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:15.628537   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:15.662954   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:15.662965   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:15.688387   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:15.688397   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:15.713473   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:15.713483   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:15.732374   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:15.732385   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:15.744275   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:15.744286   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:15.768329   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:15.768339   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:15.779343   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:15.779354   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:15.796829   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:15.796838   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:15.833142   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:15.833150   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:15.846718   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:15.846728   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:15.858710   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:15.858721   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:18.373071   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:23.375532   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:23.375764   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:23.408335   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:23.408440   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:23.434229   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:23.434308   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:23.450485   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:23.450550   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:23.465340   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:23.465413   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:23.476246   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:23.476315   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:23.487255   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:23.487321   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:23.497150   18107 logs.go:276] 0 containers: []
	W0328 12:09:23.497162   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:23.497219   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:23.507678   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:23.507695   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:23.507701   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:23.543000   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:23.543013   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:23.567453   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:23.567466   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:23.581866   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:23.581876   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:23.593467   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:23.593478   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:23.606889   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:23.606899   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:23.620790   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:23.620800   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:23.642451   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:23.642460   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:23.654634   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:23.654645   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:23.677297   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:23.677305   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:23.713490   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:23.713500   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:23.717456   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:23.717464   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:23.737818   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:23.737830   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:23.749176   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:23.749188   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:23.762534   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:23.762548   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:23.773645   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:23.773658   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
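
Editor's note: stepping back, this entire stretch of the log is one deadline-bounded retry loop: probe /healthz with a 5 s client timeout, and after every failed probe re-enumerate the containers and re-collect all logs before probing again. Schematically, with gather_all_logs standing in for the repeated blocks and the 300 s budget an assumption (the actual overall deadline is not visible in this excerpt):

    # Each iteration costs ~5 s of probe timeout plus ~2-3 s of gathering,
    # matching the ~7-8 s spacing of the healthz lines above.
    deadline=$((SECONDS + 300))
    while (( SECONDS < deadline )); do
      curl --insecure --silent --fail --max-time 5 \
        https://10.0.2.15:8443/healthz && exit 0
      gather_all_logs   # hypothetical: the docker ps / docker logs / journalctl blocks
    done
    echo "apiserver never became healthy" >&2
    exit 1
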
	I0328 12:09:26.289669   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:31.292092   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:31.292221   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:31.304576   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:31.304656   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:31.322353   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:31.322427   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:31.332700   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:31.332775   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:31.342935   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:31.343010   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:31.362088   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:31.362164   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:31.374304   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:31.374375   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:31.384042   18107 logs.go:276] 0 containers: []
	W0328 12:09:31.384054   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:31.384117   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:31.394336   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:31.394352   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:31.394358   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:31.408244   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:31.408255   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:31.420106   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:31.420116   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:31.437885   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:31.437894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:31.451291   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:31.451301   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:31.468191   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:31.468202   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:31.505164   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:31.505176   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:31.539249   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:31.539260   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:31.558416   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:31.558429   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:31.570972   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:31.570984   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:31.588556   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:31.588570   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:31.592993   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:31.592999   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:31.608394   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:31.608408   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:31.634795   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:31.634809   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:31.649096   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:31.649106   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:31.660562   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:31.660572   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:34.186045   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:39.188596   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:39.188915   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:39.222986   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:39.223118   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:39.242598   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:39.242688   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:39.257064   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:39.257143   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:39.269152   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:39.269219   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:39.280181   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:39.280241   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:39.291157   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:39.291227   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:39.301668   18107 logs.go:276] 0 containers: []
	W0328 12:09:39.301679   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:39.301737   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:39.312039   18107 logs.go:276] 1 containers: [c27d88e957e1]
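	Each retry pass rediscovers the control-plane containers with the same name-filter pattern seen in the lines above. A consolidated sketch of that discovery step (component names taken from the log; the loop itself is illustrative, not minikube's code):
	# List container IDs for each kube-system component, including exited
	# containers (-a), using the k8s_<component> name filter from the log.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  echo "$c: $(docker ps -a --filter=name=k8s_$c --format={{.ID}} | tr '\n' ' ')"
	done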
	I0328 12:09:39.312054   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:39.312059   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:39.325908   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:39.325919   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:39.351508   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:39.351520   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:39.365200   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:39.365210   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:39.388277   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:39.388286   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:39.416747   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:39.416760   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:39.429987   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:39.429997   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:39.466336   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:39.466345   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:39.470077   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:39.470083   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:39.483592   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:39.483606   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:39.500309   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:39.500320   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:39.516219   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:39.516232   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:39.550972   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:39.550983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:39.571361   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:39.571372   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:39.586613   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:39.586631   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:39.601758   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:39.601769   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
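	The cycle above repeats for the rest of this run: the apiserver healthz probe times out after roughly five seconds, and each failure triggers another log-gathering pass over the same container set. A rough manual equivalent of the probe, run from the guest (illustrative only; minikube itself authenticates with the profile's client certificate and CA rather than skipping verification):
	# Probe the apiserver health endpoint with the same ~5s budget as the log.
	curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo OK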
	I0328 12:09:42.115864   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:47.117311   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:47.117463   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:47.128951   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:47.129029   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:47.139843   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:47.139916   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:47.153349   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:47.153418   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:47.164138   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:47.164211   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:47.180491   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:47.180560   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:47.191693   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:47.191759   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:47.202072   18107 logs.go:276] 0 containers: []
	W0328 12:09:47.202085   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:47.202145   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:47.213416   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:47.213432   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:47.213438   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:47.218211   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:47.218222   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:47.251235   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:47.251247   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:47.266137   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:47.266148   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:47.284674   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:47.284685   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:47.308353   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:47.308361   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:47.320703   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:47.320714   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:47.346186   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:47.346195   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:47.359648   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:47.359659   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:47.373514   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:47.373526   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:47.385486   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:47.385496   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:47.399614   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:47.399626   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:47.411444   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:47.411453   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:47.447808   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:47.447819   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:47.461929   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:47.461942   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:47.473153   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:47.473164   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:49.986881   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:09:54.989256   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:09:54.989446   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:09:55.001668   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:09:55.001746   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:09:55.012531   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:09:55.012602   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:09:55.023575   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:09:55.023650   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:09:55.038465   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:09:55.038542   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:09:55.049108   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:09:55.049177   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:09:55.059794   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:09:55.059865   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:09:55.070297   18107 logs.go:276] 0 containers: []
	W0328 12:09:55.070312   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:09:55.070373   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:09:55.081397   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:09:55.081414   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:09:55.081420   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:09:55.117941   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:09:55.117950   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:09:55.134838   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:09:55.134849   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:09:55.146023   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:09:55.146033   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:09:55.159662   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:09:55.159675   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:09:55.199503   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:09:55.199514   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:09:55.224013   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:09:55.224025   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:09:55.237863   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:09:55.237873   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:09:55.249456   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:09:55.249466   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:09:55.253772   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:09:55.253778   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:09:55.267252   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:09:55.267262   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:09:55.290636   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:09:55.290643   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:09:55.307731   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:09:55.307746   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:09:55.330514   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:09:55.330525   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:09:55.356809   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:09:55.356819   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:09:55.369318   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:09:55.369332   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:09:57.888813   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:02.891193   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:02.891336   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:02.904268   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:02.904333   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:02.915174   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:02.915237   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:02.925777   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:02.925843   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:02.936205   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:02.936273   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:02.946716   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:02.946782   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:02.957015   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:02.957079   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:02.975308   18107 logs.go:276] 0 containers: []
	W0328 12:10:02.975317   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:02.975367   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:02.985730   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:02.985748   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:02.985754   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:02.997147   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:02.997160   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:03.011321   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:03.011330   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:03.036503   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:03.036514   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:03.059228   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:03.059237   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:03.077457   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:03.077468   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:03.095714   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:03.095724   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:03.107599   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:03.107610   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:03.118873   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:03.118883   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:03.123066   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:03.123072   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:03.139921   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:03.139931   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:03.154762   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:03.154773   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:03.169380   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:03.169390   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:03.181576   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:03.181587   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:03.218470   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:03.218481   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:03.252443   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:03.252454   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:05.764840   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:10.767286   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:10.767498   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:10.791816   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:10.791894   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:10.804788   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:10.804858   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:10.816299   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:10.816371   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:10.826967   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:10.827038   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:10.840285   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:10.840347   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:10.850942   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:10.851004   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:10.860932   18107 logs.go:276] 0 containers: []
	W0328 12:10:10.860941   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:10.860989   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:10.871735   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:10.871751   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:10.871756   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:10.885117   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:10.885128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:10.895971   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:10.895983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:10.907405   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:10.907415   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:10.921730   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:10.921742   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:10.936550   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:10.936559   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:10.948200   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:10.948209   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:10.964943   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:10.964953   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:10.979549   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:10.979558   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:10.983977   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:10.983983   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:11.013023   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:11.013035   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:11.025228   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:11.025238   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:11.048275   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:11.048284   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:11.060343   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:11.060354   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:11.099104   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:11.099114   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:11.133331   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:11.133341   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:13.650118   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:18.652816   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:18.653019   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:18.677171   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:18.677304   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:18.694614   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:18.694694   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:18.707761   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:18.707835   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:18.719327   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:18.719397   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:18.729474   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:18.729541   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:18.740284   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:18.740347   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:18.751551   18107 logs.go:276] 0 containers: []
	W0328 12:10:18.751561   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:18.751621   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:18.762580   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:18.762597   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:18.762602   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:18.785483   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:18.785494   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:18.797563   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:18.797573   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:18.801587   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:18.801596   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:18.812881   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:18.812894   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:18.829872   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:18.829884   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:18.841198   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:18.841208   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:18.854776   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:18.854785   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:18.869578   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:18.869589   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:18.884486   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:18.884496   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:18.920323   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:18.920335   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:18.937707   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:18.937721   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:18.951780   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:18.951794   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:18.989559   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:18.989567   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:19.004293   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:19.004303   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:19.029675   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:19.029686   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:21.546733   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:26.547922   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:26.548183   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:10:26.574772   18107 logs.go:276] 2 containers: [31ac8c0b33dd cde1338e3262]
	I0328 12:10:26.574909   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:10:26.593587   18107 logs.go:276] 2 containers: [3ae1821f1ee7 a4c23e1c3563]
	I0328 12:10:26.593664   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:10:26.606728   18107 logs.go:276] 1 containers: [7d5618b87c9e]
	I0328 12:10:26.606807   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:10:26.618226   18107 logs.go:276] 2 containers: [e4a233a9548d b4451a54079a]
	I0328 12:10:26.618304   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:10:26.628770   18107 logs.go:276] 1 containers: [5f1339913960]
	I0328 12:10:26.628842   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:10:26.638830   18107 logs.go:276] 2 containers: [241f3f92c6af 25f63db07e9f]
	I0328 12:10:26.638903   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:10:26.649320   18107 logs.go:276] 0 containers: []
	W0328 12:10:26.649330   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:10:26.649387   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:10:26.663127   18107 logs.go:276] 1 containers: [c27d88e957e1]
	I0328 12:10:26.663145   18107 logs.go:123] Gathering logs for kube-apiserver [31ac8c0b33dd] ...
	I0328 12:10:26.663151   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31ac8c0b33dd"
	I0328 12:10:26.677985   18107 logs.go:123] Gathering logs for coredns [7d5618b87c9e] ...
	I0328 12:10:26.677995   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5618b87c9e"
	I0328 12:10:26.691357   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:10:26.691369   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:10:26.726045   18107 logs.go:123] Gathering logs for kube-scheduler [e4a233a9548d] ...
	I0328 12:10:26.726056   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a233a9548d"
	I0328 12:10:26.744829   18107 logs.go:123] Gathering logs for kube-controller-manager [241f3f92c6af] ...
	I0328 12:10:26.744839   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241f3f92c6af"
	I0328 12:10:26.765571   18107 logs.go:123] Gathering logs for kube-controller-manager [25f63db07e9f] ...
	I0328 12:10:26.765582   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25f63db07e9f"
	I0328 12:10:26.787122   18107 logs.go:123] Gathering logs for etcd [a4c23e1c3563] ...
	I0328 12:10:26.787135   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c23e1c3563"
	I0328 12:10:26.802042   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:10:26.802053   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:10:26.806510   18107 logs.go:123] Gathering logs for etcd [3ae1821f1ee7] ...
	I0328 12:10:26.806521   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ae1821f1ee7"
	I0328 12:10:26.821074   18107 logs.go:123] Gathering logs for kube-scheduler [b4451a54079a] ...
	I0328 12:10:26.821088   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4451a54079a"
	I0328 12:10:26.835807   18107 logs.go:123] Gathering logs for storage-provisioner [c27d88e957e1] ...
	I0328 12:10:26.835820   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27d88e957e1"
	I0328 12:10:26.849286   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:10:26.849301   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:10:26.886165   18107 logs.go:123] Gathering logs for kube-proxy [5f1339913960] ...
	I0328 12:10:26.886174   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1339913960"
	I0328 12:10:26.897910   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:10:26.897920   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:10:26.921142   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:10:26.921155   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:10:26.933232   18107 logs.go:123] Gathering logs for kube-apiserver [cde1338e3262] ...
	I0328 12:10:26.933242   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cde1338e3262"
	I0328 12:10:29.460393   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:34.463155   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:34.463374   18107 kubeadm.go:591] duration metric: took 4m4.088980625s to restartPrimaryControlPlane
	W0328 12:10:34.463516   18107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 12:10:34.463591   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0328 12:10:35.488825   18107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.025207s)
	I0328 12:10:35.488889   18107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 12:10:35.493871   18107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 12:10:35.496735   18107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 12:10:35.499376   18107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 12:10:35.499382   18107 kubeadm.go:156] found existing configuration files:
	
	I0328 12:10:35.499403   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf
	I0328 12:10:35.501921   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 12:10:35.501945   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 12:10:35.505167   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf
	I0328 12:10:35.508497   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 12:10:35.508526   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 12:10:35.511297   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf
	I0328 12:10:35.513847   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 12:10:35.513868   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 12:10:35.517046   18107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf
	I0328 12:10:35.519800   18107 kubeadm.go:162] "https://control-plane.minikube.internal:53376" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53376 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 12:10:35.519826   18107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
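	The four grep-and-remove steps above implement one stale-config check: each kubeconfig under /etc/kubernetes must mention the expected control-plane URL, and any file that does not (or, as here, does not exist) is removed so kubeadm can regenerate it. A consolidated sketch, with the URL from this run:
	# Remove kubeconfigs that don't reference the expected control-plane endpoint.
	URL=https://control-plane.minikube.internal:53376
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$URL" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done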
	I0328 12:10:35.522352   18107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 12:10:35.539678   18107 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0328 12:10:35.539727   18107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 12:10:35.591150   18107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 12:10:35.591204   18107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 12:10:35.591267   18107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 12:10:35.639466   18107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 12:10:35.642685   18107 out.go:204]   - Generating certificates and keys ...
	I0328 12:10:35.642718   18107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 12:10:35.642749   18107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 12:10:35.642786   18107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 12:10:35.642817   18107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 12:10:35.642859   18107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 12:10:35.642888   18107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 12:10:35.642921   18107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 12:10:35.642958   18107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 12:10:35.643001   18107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 12:10:35.643040   18107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 12:10:35.643057   18107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 12:10:35.643087   18107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 12:10:35.675910   18107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 12:10:35.745597   18107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 12:10:35.800624   18107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 12:10:35.840594   18107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 12:10:35.869610   18107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 12:10:35.870096   18107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 12:10:35.870121   18107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 12:10:35.939062   18107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 12:10:35.942555   18107 out.go:204]   - Booting up control plane ...
	I0328 12:10:35.942599   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 12:10:35.942654   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 12:10:35.942690   18107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 12:10:35.942729   18107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 12:10:35.942808   18107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 12:10:40.442192   18107 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501072 seconds
	I0328 12:10:40.442283   18107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 12:10:40.446912   18107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 12:10:40.959368   18107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 12:10:40.959562   18107 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-732000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 12:10:41.463903   18107 kubeadm.go:309] [bootstrap-token] Using token: c3fq2i.3w6j4tvs3qwbbusu
	I0328 12:10:41.470174   18107 out.go:204]   - Configuring RBAC rules ...
	I0328 12:10:41.470237   18107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 12:10:41.470289   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 12:10:41.475836   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 12:10:41.476787   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 12:10:41.477539   18107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 12:10:41.478396   18107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 12:10:41.481418   18107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 12:10:41.640985   18107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 12:10:41.868050   18107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 12:10:41.868466   18107 kubeadm.go:309] 
	I0328 12:10:41.868500   18107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 12:10:41.868503   18107 kubeadm.go:309] 
	I0328 12:10:41.868538   18107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 12:10:41.868551   18107 kubeadm.go:309] 
	I0328 12:10:41.868569   18107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 12:10:41.868603   18107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 12:10:41.868634   18107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 12:10:41.868637   18107 kubeadm.go:309] 
	I0328 12:10:41.868662   18107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 12:10:41.868666   18107 kubeadm.go:309] 
	I0328 12:10:41.868691   18107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 12:10:41.868695   18107 kubeadm.go:309] 
	I0328 12:10:41.868721   18107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 12:10:41.868758   18107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 12:10:41.868804   18107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 12:10:41.868810   18107 kubeadm.go:309] 
	I0328 12:10:41.868856   18107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 12:10:41.868901   18107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 12:10:41.868905   18107 kubeadm.go:309] 
	I0328 12:10:41.868949   18107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token c3fq2i.3w6j4tvs3qwbbusu \
	I0328 12:10:41.869010   18107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b \
	I0328 12:10:41.869020   18107 kubeadm.go:309] 	--control-plane 
	I0328 12:10:41.869024   18107 kubeadm.go:309] 
	I0328 12:10:41.869071   18107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 12:10:41.869073   18107 kubeadm.go:309] 
	I0328 12:10:41.869130   18107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token c3fq2i.3w6j4tvs3qwbbusu \
	I0328 12:10:41.869180   18107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:20869415dc16efafc1959a6456df40d4e2e2965c748cb8825bf51e742e13ba7b 
	I0328 12:10:41.869307   18107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 12:10:41.869384   18107 cni.go:84] Creating CNI manager for ""
	I0328 12:10:41.869393   18107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:10:41.871085   18107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 12:10:41.877794   18107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 12:10:41.880824   18107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
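	The 457-byte conflist written here is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. For orientation, a hypothetical minimal bridge conflist of the same general shape might look like this (illustrative subnet and fields, not the file minikube actually writes):
	# Hypothetical stand-in for /etc/cni/net.d/1-k8s.conflist (not from the log).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF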
	I0328 12:10:41.885518   18107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 12:10:41.885562   18107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 12:10:41.885576   18107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-732000 minikube.k8s.io/updated_at=2024_03_28T12_10_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967 minikube.k8s.io/name=stopped-upgrade-732000 minikube.k8s.io/primary=true
	I0328 12:10:41.888571   18107 ops.go:34] apiserver oom_adj: -16
	I0328 12:10:41.935865   18107 kubeadm.go:1107] duration metric: took 50.337416ms to wait for elevateKubeSystemPrivileges
	W0328 12:10:41.935885   18107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 12:10:41.935888   18107 kubeadm.go:393] duration metric: took 4m11.574379s to StartCluster
	I0328 12:10:41.935898   18107 settings.go:142] acquiring lock: {Name:mkfc1d043149af7cff65561e827dba55cefba229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:10:41.935986   18107 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:10:41.936410   18107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/kubeconfig: {Name:mk8ceaf6085ee220c9fe396e9688a488924a6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:10:41.936611   18107 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:10:41.940550   18107 out.go:177] * Verifying Kubernetes components...
	I0328 12:10:41.936623   18107 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 12:10:41.936692   18107 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:10:41.948780   18107 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-732000"
	I0328 12:10:41.948794   18107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 12:10:41.948800   18107 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-732000"
	W0328 12:10:41.948803   18107 addons.go:243] addon storage-provisioner should already be in state true
	I0328 12:10:41.948797   18107 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-732000"
	I0328 12:10:41.948831   18107 host.go:66] Checking if "stopped-upgrade-732000" exists ...
	I0328 12:10:41.948833   18107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-732000"
	I0328 12:10:41.952756   18107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 12:10:41.955793   18107 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:10:41.955799   18107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 12:10:41.955805   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:10:41.956901   18107 kapi.go:59] client config for stopped-upgrade-732000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/stopped-upgrade-732000/client.key", CAFile:"/Users/jenkins/minikube-integration/17877-15366/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043d2d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 12:10:41.957019   18107 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-732000"
	W0328 12:10:41.957026   18107 addons.go:243] addon default-storageclass should already be in state true
	I0328 12:10:41.957036   18107 host.go:66] Checking if "stopped-upgrade-732000" exists ...
	I0328 12:10:41.958039   18107 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 12:10:41.958054   18107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 12:10:41.958066   18107 sshutil.go:53] new ssh client: &{IP:localhost Port:53341 SSHKeyPath:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/stopped-upgrade-732000/id_rsa Username:docker}
	I0328 12:10:42.025108   18107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 12:10:42.030317   18107 api_server.go:52] waiting for apiserver process to appear ...
	I0328 12:10:42.030356   18107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 12:10:42.034239   18107 api_server.go:72] duration metric: took 97.616625ms to wait for apiserver process to appear ...
	I0328 12:10:42.034247   18107 api_server.go:88] waiting for apiserver healthz status ...
	I0328 12:10:42.034254   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:42.065128   18107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 12:10:42.079949   18107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 12:10:47.036378   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:47.036404   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:52.036698   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:52.036738   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:10:57.037075   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:10:57.037114   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:02.037509   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:02.037524   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:07.038010   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:07.038034   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:12.038674   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:12.038716   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0328 12:11:12.446155   18107 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0328 12:11:12.450568   18107 out.go:177] * Enabled addons: storage-provisioner
	I0328 12:11:12.459521   18107 addons.go:505] duration metric: took 30.522538917s for enable addons: enabled=[storage-provisioner]
	I0328 12:11:17.039955   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:17.039999   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:22.041216   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:22.041240   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:27.042593   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:27.042615   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:32.044250   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:32.044289   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:37.046619   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:37.046680   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:42.048946   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:42.049064   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:42.067282   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:11:42.067362   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:42.089437   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:11:42.089507   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:42.100861   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:11:42.100931   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:42.112310   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:11:42.112373   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:42.130324   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:11:42.130397   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:42.141644   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:11:42.141710   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:42.152480   18107 logs.go:276] 0 containers: []
	W0328 12:11:42.152490   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:42.152547   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:42.163168   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:11:42.163179   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:11:42.163185   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:11:42.175511   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:11:42.175523   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:11:42.190737   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:11:42.190745   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:11:42.211777   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:11:42.211798   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:11:42.228456   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:42.228472   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:42.257892   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:11:42.257916   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:42.276021   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:11:42.276035   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:11:42.291453   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:42.291466   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:42.333157   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:42.333170   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:42.339508   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:42.339527   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:42.377472   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:11:42.377483   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:11:42.392889   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:11:42.392897   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:11:42.407831   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:11:42.407844   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:11:44.928741   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:49.931195   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:49.931303   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:49.943768   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:11:49.943846   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:49.954793   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:11:49.954861   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:49.964966   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:11:49.965033   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:49.975103   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:11:49.975177   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:49.985503   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:11:49.985573   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:49.995947   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:11:49.996014   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:50.005788   18107 logs.go:276] 0 containers: []
	W0328 12:11:50.005798   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:50.005857   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:50.016875   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:11:50.016889   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:11:50.016895   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:11:50.028622   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:50.028634   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:50.051861   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:50.051871   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:50.055872   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:11:50.055878   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:11:50.070423   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:11:50.070433   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:11:50.082919   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:11:50.082930   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:11:50.097993   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:11:50.098003   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:11:50.115477   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:11:50.115488   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:11:50.126755   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:50.126765   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:50.164324   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:50.164331   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:50.198852   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:11:50.198863   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:11:50.213691   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:11:50.213701   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:11:50.225274   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:11:50.225285   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:11:52.742190   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:11:57.743320   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:11:57.743581   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:11:57.768790   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:11:57.768919   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:11:57.786305   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:11:57.786385   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:11:57.800818   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:11:57.800893   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:11:57.812085   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:11:57.812155   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:11:57.822563   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:11:57.822633   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:11:57.833219   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:11:57.833280   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:11:57.843567   18107 logs.go:276] 0 containers: []
	W0328 12:11:57.843584   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:11:57.843655   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:11:57.856770   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:11:57.856783   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:11:57.856789   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:11:57.868523   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:11:57.868532   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:11:57.881717   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:11:57.881731   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:11:57.905533   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:11:57.905548   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:11:57.919973   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:11:57.919987   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:11:57.924046   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:11:57.924053   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:11:57.958642   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:11:57.958655   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:11:57.973440   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:11:57.973453   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:11:57.987058   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:11:57.987067   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:11:58.002213   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:11:58.002226   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:11:58.013965   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:11:58.013975   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:11:58.032255   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:11:58.032265   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:11:58.069223   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:11:58.069237   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:00.582091   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:05.584528   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:05.584911   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:05.621139   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:05.621255   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:05.642232   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:05.642322   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:05.657090   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:12:05.657156   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:05.669172   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:05.669253   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:05.680297   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:05.680355   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:05.693813   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:05.693874   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:05.703428   18107 logs.go:276] 0 containers: []
	W0328 12:12:05.703440   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:05.703495   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:05.726442   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:05.726459   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:05.726464   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:05.764223   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:05.764235   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:05.778314   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:05.778328   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:05.790172   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:05.790183   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:05.802326   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:05.802337   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:05.820695   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:05.820708   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:05.831981   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:05.831992   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:05.836120   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:05.836129   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:05.876604   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:05.876614   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:05.890351   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:05.890361   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:05.901727   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:05.901737   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:05.913278   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:05.913292   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:05.927685   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:05.927699   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:08.454599   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:13.457022   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:13.457197   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:13.469077   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:13.469140   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:13.479523   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:13.479583   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:13.489823   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:12:13.489885   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:13.500261   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:13.500320   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:13.510535   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:13.510605   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:13.520704   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:13.520768   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:13.530304   18107 logs.go:276] 0 containers: []
	W0328 12:12:13.530319   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:13.530376   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:13.540925   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:13.540940   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:13.540945   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:13.577296   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:13.577303   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:13.581240   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:13.581246   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:13.592691   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:13.592701   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:13.604080   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:13.604094   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:13.621357   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:13.621366   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:13.632469   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:13.632479   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:13.655755   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:13.655762   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:13.666940   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:13.666951   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:13.705988   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:13.706000   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:13.723775   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:13.723788   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:13.737515   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:13.737527   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:13.748578   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:13.748590   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:16.265615   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:21.268147   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:21.268508   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:21.305890   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:21.306018   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:21.326645   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:21.326763   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:21.342396   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:12:21.342470   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:21.355865   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:21.355940   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:21.369454   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:21.369527   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:21.380006   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:21.380061   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:21.389448   18107 logs.go:276] 0 containers: []
	W0328 12:12:21.389460   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:21.389520   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:21.399852   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:21.399870   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:21.399875   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:21.424158   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:21.424168   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:21.434921   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:21.434931   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:21.469568   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:21.469579   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:21.484361   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:21.484373   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:21.496326   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:21.496337   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:21.512463   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:21.512473   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:21.526590   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:21.526600   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:21.543223   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:21.543237   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:21.581068   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:21.581074   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:21.585090   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:21.585100   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:21.598829   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:21.598841   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:21.615808   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:21.615818   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:24.129748   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:29.131480   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:29.131875   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:29.164031   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:29.164159   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:29.184898   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:29.184974   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:29.199985   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:12:29.200065   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:29.212044   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:29.212112   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:29.222499   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:29.222568   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:29.233368   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:29.233431   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:29.243740   18107 logs.go:276] 0 containers: []
	W0328 12:12:29.243751   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:29.243809   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:29.254623   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:29.254640   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:29.254645   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:29.268678   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:29.268691   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:29.280545   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:29.280556   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:29.292244   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:29.292254   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:29.308887   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:29.308899   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:29.325554   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:29.325564   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:29.329610   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:29.329619   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:29.362822   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:29.362833   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:29.377584   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:29.377593   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:29.401926   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:29.401933   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:29.414511   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:29.414524   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:29.452674   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:29.452685   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:29.466954   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:29.466964   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:31.980530   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:36.981529   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:36.981944   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:37.015379   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:37.015521   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:37.034254   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:37.034357   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:37.048701   18107 logs.go:276] 2 containers: [d1fea0c2576d e6b1891c2b93]
	I0328 12:12:37.048777   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:37.061131   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:37.061190   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:37.071664   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:37.071728   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:37.081831   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:37.081901   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:37.091806   18107 logs.go:276] 0 containers: []
	W0328 12:12:37.091820   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:37.091877   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:37.101901   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:37.101915   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:37.101920   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:37.113040   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:37.113050   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:37.136987   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:37.136994   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:37.140882   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:37.140891   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:37.152228   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:37.152240   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:37.163623   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:37.163635   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:37.180847   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:37.180859   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:37.192624   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:37.192637   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:37.206906   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:37.206917   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:37.219957   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:37.219971   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:37.258261   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:37.258273   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:37.293214   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:37.293225   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:37.307668   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:37.307680   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:39.824304   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:44.827078   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:44.827506   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:44.871837   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:44.871969   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:44.892578   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:44.892683   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:44.907865   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:12:44.907950   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:44.919554   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:44.919623   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:44.930582   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:44.930649   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:44.945174   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:44.945240   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:44.955065   18107 logs.go:276] 0 containers: []
	W0328 12:12:44.955075   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:44.955126   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:44.965747   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:44.965762   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:12:44.965767   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:12:44.977293   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:44.977306   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:44.996477   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:44.996489   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:45.020149   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:45.020159   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:45.031545   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:45.031558   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:45.067474   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:45.067481   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:45.090023   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:45.090033   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:45.102252   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:45.102265   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:45.118944   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:45.118957   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:45.130596   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:45.130606   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:45.147339   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:45.147349   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:45.158746   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:45.158755   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:45.194674   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:45.194687   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:45.209244   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:12:45.209256   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:12:45.220126   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:45.220138   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:47.726739   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:12:52.729581   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:12:52.729998   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:12:52.770239   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:12:52.770363   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:12:52.792795   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:12:52.792902   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:12:52.808318   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:12:52.808385   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:12:52.821127   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:12:52.821198   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:12:52.831998   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:12:52.832057   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:12:52.842939   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:12:52.843008   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:12:52.857590   18107 logs.go:276] 0 containers: []
	W0328 12:12:52.857604   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:12:52.857664   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:12:52.867868   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:12:52.867884   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:12:52.867889   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:12:52.879404   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:12:52.879414   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:12:52.891113   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:12:52.891123   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:12:52.902864   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:12:52.902875   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:12:52.917596   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:12:52.917608   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:12:52.931991   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:12:52.932003   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:12:52.943307   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:12:52.943320   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:12:52.947910   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:12:52.947917   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:12:52.982524   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:12:52.982536   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:12:52.994288   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:12:52.994300   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:12:53.006107   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:12:53.006117   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:12:53.019860   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:12:53.019873   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:12:53.032115   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:12:53.032127   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:12:53.055993   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:12:53.056002   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:12:53.092924   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:12:53.092940   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:12:55.612014   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:00.614680   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:00.614889   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:00.643473   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:00.643596   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:00.661958   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:00.662038   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:00.675065   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:00.677518   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:00.688255   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:00.688327   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:00.698206   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:00.698270   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:00.708339   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:00.708404   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:00.718607   18107 logs.go:276] 0 containers: []
	W0328 12:13:00.718617   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:00.718673   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:00.728742   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:00.728761   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:00.728767   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:00.764010   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:00.764025   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:00.778757   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:00.778766   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:00.790705   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:00.790718   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:00.801824   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:00.801834   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:00.812938   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:00.812947   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:00.826906   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:00.826928   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:00.850715   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:00.850723   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:00.862853   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:00.862864   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:00.876916   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:00.876926   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:00.888550   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:00.888561   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:00.899695   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:00.899707   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:00.911009   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:00.911021   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:00.928255   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:00.928264   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:00.965900   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:00.965910   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:03.470720   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:08.473459   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:08.473871   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:08.506101   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:08.506227   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:08.526527   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:08.526622   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:08.543532   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:08.543602   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:08.555353   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:08.555418   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:08.566063   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:08.566125   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:08.576662   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:08.576731   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:08.586939   18107 logs.go:276] 0 containers: []
	W0328 12:13:08.586952   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:08.587000   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:08.598429   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:08.598444   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:08.598450   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:08.609906   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:08.609915   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:08.628072   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:08.628085   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:08.640123   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:08.640134   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:08.655291   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:08.655304   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:08.667187   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:08.667196   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:08.681715   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:08.681727   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:08.693006   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:08.693021   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:08.704831   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:08.704841   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:08.738741   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:08.738753   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:08.753007   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:08.753019   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:08.764447   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:08.764460   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:08.788118   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:08.788126   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:08.800249   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:08.800261   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:08.837882   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:08.837892   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
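
Every "stopped:" line in these passes carries the same Go net/http client error: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the client gave up after its 5s Timeout without ever receiving HTTP response headers, which can happen even when the port accepts connections. A throwaway reproduction of that exact error text (hypothetical code, unrelated to minikube):

    // Accept TCP connections but never write a response; an HTTP client with
    // a Timeout then fails exactly like the healthz probes in this log.
    package main

    import (
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go func() {
            for {
                conn, err := ln.Accept()
                if err != nil {
                    return
                }
                _ = conn // hold the connection open; send nothing
            }
        }()

        client := &http.Client{Timeout: 5 * time.Second}
        _, err = client.Get("http://" + ln.Addr().String() + "/healthz")
        fmt.Println(err)
        // Get "http://127.0.0.1:<port>/healthz": context deadline exceeded
        // (Client.Timeout exceeded while awaiting headers)
    }
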
	I0328 12:13:11.342255   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:16.345176   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:16.345574   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:16.395165   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:16.395282   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:16.413750   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:16.413841   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:16.427289   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:16.427360   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:16.440509   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:16.440575   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:16.450602   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:16.450676   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:16.462056   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:16.462117   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:16.472478   18107 logs.go:276] 0 containers: []
	W0328 12:13:16.472490   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:16.472548   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:16.483006   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:16.483021   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:16.483026   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:16.494236   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:16.494246   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:16.505520   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:16.505531   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:16.517720   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:16.517733   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:16.555186   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:16.555197   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:16.566739   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:16.566754   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:16.578117   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:16.578128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:16.592233   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:16.592247   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:16.609644   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:16.609654   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:16.613949   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:16.613959   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:16.648755   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:16.648771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:16.669111   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:16.669121   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:16.680871   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:16.680883   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:16.706414   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:16.706425   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:16.721125   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:16.721137   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:19.237789   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:24.240498   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:24.240577   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:24.251645   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:24.251716   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:24.263276   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:24.263340   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:24.275432   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:24.275493   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:24.286750   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:24.286809   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:24.297491   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:24.297550   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:24.308681   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:24.308737   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:24.321686   18107 logs.go:276] 0 containers: []
	W0328 12:13:24.321698   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:24.321741   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:24.336285   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:24.336301   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:24.336317   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:24.374762   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:24.374771   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:24.388986   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:24.388997   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:24.404721   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:24.404736   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:24.421593   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:24.421606   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:24.434893   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:24.434905   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:24.474823   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:24.474834   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:24.479700   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:24.479711   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:24.493318   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:24.493336   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:24.506607   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:24.506616   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:24.522648   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:24.522660   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:24.541025   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:24.541037   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:24.554333   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:24.554345   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:24.580235   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:24.580247   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:24.598017   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:24.598028   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:27.118017   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:32.120485   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:32.120717   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:32.136794   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:32.136866   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:32.157873   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:32.157932   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:32.168356   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:32.168417   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:32.179147   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:32.179213   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:32.189687   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:32.189757   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:32.200044   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:32.200102   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:32.209722   18107 logs.go:276] 0 containers: []
	W0328 12:13:32.209732   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:32.209785   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:32.219801   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:32.219817   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:32.219821   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:32.258553   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:32.258564   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:32.272731   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:32.272744   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:32.283923   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:32.283933   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:32.295168   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:32.295182   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:32.323983   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:32.323997   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:32.350189   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:32.350197   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:32.354447   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:32.354452   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:32.366863   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:32.366873   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:32.378517   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:32.378527   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:32.390143   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:32.390155   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:32.404211   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:32.404224   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:32.418790   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:32.418802   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:32.430948   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:32.430959   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:32.474041   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:32.474055   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
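
Each pass enumerates one container per control-plane component with docker ps -a --filter=name=k8s_<component> (the logs.go:276 lines) and then tails 400 lines from each hit (the logs.go:123 lines); minikube runs these commands over SSH inside the guest via ssh_runner. Stripped of the SSH layer, the pass reduces to roughly this sketch (helper names are hypothetical):

    // Sketch of one diagnostics pass: list k8s_* containers, tail their logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(name string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, name := range components {
            ids := containerIDs(name)
            fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. the logs.go:276 lines
            for _, id := range ids {
                // Equivalent of: /bin/bash -c "docker logs --tail 400 <id>"
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
            }
        }
    }
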
	I0328 12:13:34.987468   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:39.989882   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:39.990366   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:40.035168   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:40.035285   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:40.053973   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:40.054059   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:40.072033   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:40.072104   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:40.083536   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:40.083610   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:40.102142   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:40.102203   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:40.112632   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:40.112690   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:40.123126   18107 logs.go:276] 0 containers: []
	W0328 12:13:40.123139   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:40.123196   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:40.132964   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:40.132985   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:40.132991   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:40.169229   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:40.169241   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:40.180639   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:40.180651   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:40.192394   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:40.192407   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:40.206533   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:40.206545   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:40.221923   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:40.221932   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:40.246475   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:40.246484   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:40.258266   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:40.258279   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:40.293479   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:40.293491   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:40.307928   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:40.307937   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:40.322849   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:40.322861   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:40.334156   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:40.334165   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:40.351329   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:40.351338   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:40.363702   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:40.363713   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:40.368328   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:40.368336   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:42.889445   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:47.892028   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:47.892131   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:47.902978   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:47.903034   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:47.914194   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:47.914259   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:47.928169   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:47.928226   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:47.938912   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:47.938966   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:47.949865   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:47.949920   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:47.962624   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:47.962672   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:47.974087   18107 logs.go:276] 0 containers: []
	W0328 12:13:47.974097   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:47.974138   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:47.985757   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:47.985770   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:47.985774   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:48.003527   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:48.003538   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:48.029487   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:48.029496   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:48.033748   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:48.033754   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:48.048391   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:48.048405   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:48.060587   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:48.060602   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:48.076833   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:48.076847   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:48.115734   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:48.115746   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:48.127891   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:48.127904   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:48.143311   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:48.143322   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:48.183166   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:48.183183   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:48.196742   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:48.196754   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:48.211483   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:48.211497   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:48.226890   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:48.226902   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:48.239711   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:48.239722   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:50.754870   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:13:55.757281   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:13:55.757676   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:13:55.790015   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:13:55.790134   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:13:55.809725   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:13:55.809839   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:13:55.824485   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:13:55.824559   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:13:55.836305   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:13:55.836364   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:13:55.847288   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:13:55.847360   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:13:55.859647   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:13:55.859716   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:13:55.870120   18107 logs.go:276] 0 containers: []
	W0328 12:13:55.870130   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:13:55.870181   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:13:55.880462   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:13:55.880481   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:13:55.880486   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:13:55.885133   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:13:55.885143   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:13:55.899059   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:13:55.899072   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:13:55.911039   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:13:55.911052   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:13:55.925690   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:13:55.925700   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:13:55.938211   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:13:55.938221   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:13:55.972192   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:13:55.972202   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:13:55.983909   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:13:55.983921   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:13:55.995158   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:13:55.995168   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:13:56.031559   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:13:56.031570   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:13:56.044777   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:13:56.044787   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:13:56.056379   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:13:56.056389   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:13:56.068537   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:13:56.068548   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:13:56.080127   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:13:56.080140   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:13:56.097029   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:13:56.097039   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:13:58.622582   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:03.623947   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:03.624461   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:14:03.662271   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:14:03.662399   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:14:03.684138   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:14:03.684249   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:14:03.700135   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:14:03.700200   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:14:03.712394   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:14:03.712465   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:14:03.724067   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:14:03.724131   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:14:03.734679   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:14:03.734747   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:14:03.744170   18107 logs.go:276] 0 containers: []
	W0328 12:14:03.744182   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:14:03.744234   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:14:03.757618   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:14:03.757633   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:14:03.757638   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:14:03.769162   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:14:03.769173   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:14:03.806759   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:14:03.806770   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:14:03.810863   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:14:03.810873   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:14:03.824513   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:14:03.824524   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:14:03.839027   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:14:03.839038   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:14:03.850619   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:14:03.850629   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:14:03.868166   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:14:03.868176   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:14:03.893554   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:14:03.893564   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:14:03.905574   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:14:03.905588   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:14:03.941437   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:14:03.941451   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:14:03.957902   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:14:03.957913   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:14:03.970542   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:14:03.970554   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:14:03.982253   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:14:03.982266   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:14:03.993352   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:14:03.993363   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:14:06.509356   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:11.510949   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:11.511456   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:14:11.548320   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:14:11.548453   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:14:11.570046   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:14:11.570162   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:14:11.585800   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:14:11.585876   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:14:11.598477   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:14:11.598546   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:14:11.617522   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:14:11.617587   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:14:11.628655   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:14:11.628719   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:14:11.638725   18107 logs.go:276] 0 containers: []
	W0328 12:14:11.638735   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:14:11.638792   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:14:11.649749   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:14:11.649766   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:14:11.649770   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:14:11.688516   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:14:11.688523   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:14:11.704033   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:14:11.704043   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:14:11.723943   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:14:11.723957   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:14:11.739140   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:14:11.739149   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:14:11.750993   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:14:11.751004   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:14:11.755508   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:14:11.755515   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:14:11.773188   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:14:11.773197   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:14:11.807689   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:14:11.807701   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:14:11.821505   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:14:11.821516   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:14:11.833198   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:14:11.833210   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:14:11.848452   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:14:11.848464   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:14:11.874488   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:14:11.874496   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:14:11.888531   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:14:11.888541   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:14:11.900075   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:14:11.900084   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:14:14.413720   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:19.414386   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:19.414877   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:14:19.455461   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:14:19.455585   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:14:19.475673   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:14:19.475782   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:14:19.490604   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:14:19.490677   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:14:19.502661   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:14:19.502734   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:14:19.513901   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:14:19.513971   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:14:19.524861   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:14:19.524932   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:14:19.538970   18107 logs.go:276] 0 containers: []
	W0328 12:14:19.538979   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:14:19.539027   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:14:19.566151   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:14:19.566172   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:14:19.566177   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:14:19.611664   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:14:19.611680   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:14:19.625643   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:14:19.625653   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:14:19.629877   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:14:19.629883   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:14:19.644064   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:14:19.644077   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:14:19.655436   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:14:19.655446   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:14:19.670981   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:14:19.670994   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:14:19.685221   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:14:19.685230   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:14:19.709278   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:14:19.709285   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:14:19.745488   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:14:19.745501   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:14:19.757171   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:14:19.757183   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:14:19.768794   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:14:19.768805   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:14:19.780575   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:14:19.780586   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:14:19.793701   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:14:19.793709   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:14:19.810668   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:14:19.810676   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:14:22.324077   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:27.326341   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:27.326454   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:14:27.339641   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:14:27.339710   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:14:27.350829   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:14:27.350894   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:14:27.361709   18107 logs.go:276] 4 containers: [84909d1e88aa 66001a4d782d d1fea0c2576d e6b1891c2b93]
	I0328 12:14:27.361782   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:14:27.371998   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:14:27.372064   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:14:27.382060   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:14:27.382124   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:14:27.392700   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:14:27.392763   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:14:27.402771   18107 logs.go:276] 0 containers: []
	W0328 12:14:27.402782   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:14:27.402834   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:14:27.413111   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:14:27.413132   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:14:27.413138   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:14:27.417888   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:14:27.417896   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:14:27.435773   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:14:27.435786   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:14:27.459254   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:14:27.459262   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:14:27.474767   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:14:27.474776   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:14:27.492029   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:14:27.492041   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:14:27.529550   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:14:27.529558   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:14:27.547038   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:14:27.547050   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:14:27.560523   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:14:27.560536   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:14:27.572906   18107 logs.go:123] Gathering logs for coredns [e6b1891c2b93] ...
	I0328 12:14:27.572918   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6b1891c2b93"
	I0328 12:14:27.584640   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:14:27.584650   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:14:27.619115   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:14:27.619128   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:14:27.633594   18107 logs.go:123] Gathering logs for coredns [d1fea0c2576d] ...
	I0328 12:14:27.633604   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1fea0c2576d"
	I0328 12:14:27.645309   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:14:27.645318   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:14:27.657084   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:14:27.657096   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:14:30.170870   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:35.173727   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:35.174096   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 12:14:35.206495   18107 logs.go:276] 1 containers: [c9a5627e0ac7]
	I0328 12:14:35.206621   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 12:14:35.227682   18107 logs.go:276] 1 containers: [f373a03cea44]
	I0328 12:14:35.227804   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 12:14:35.242883   18107 logs.go:276] 4 containers: [aa76dd4fedd8 b4f8c8ad14f6 84909d1e88aa 66001a4d782d]
	I0328 12:14:35.242963   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 12:14:35.254854   18107 logs.go:276] 1 containers: [4d167dfc7911]
	I0328 12:14:35.254918   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 12:14:35.266057   18107 logs.go:276] 1 containers: [16fbad62624f]
	I0328 12:14:35.266125   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 12:14:35.282859   18107 logs.go:276] 1 containers: [179ba85bc61f]
	I0328 12:14:35.282924   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 12:14:35.293275   18107 logs.go:276] 0 containers: []
	W0328 12:14:35.293288   18107 logs.go:278] No container was found matching "kindnet"
	I0328 12:14:35.293348   18107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0328 12:14:35.306936   18107 logs.go:276] 1 containers: [8de216c14235]
	I0328 12:14:35.306954   18107 logs.go:123] Gathering logs for coredns [84909d1e88aa] ...
	I0328 12:14:35.306958   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84909d1e88aa"
	I0328 12:14:35.318774   18107 logs.go:123] Gathering logs for coredns [66001a4d782d] ...
	I0328 12:14:35.318786   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66001a4d782d"
	I0328 12:14:35.330386   18107 logs.go:123] Gathering logs for kube-proxy [16fbad62624f] ...
	I0328 12:14:35.330397   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16fbad62624f"
	I0328 12:14:35.342113   18107 logs.go:123] Gathering logs for etcd [f373a03cea44] ...
	I0328 12:14:35.342125   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f373a03cea44"
	I0328 12:14:35.356088   18107 logs.go:123] Gathering logs for kube-controller-manager [179ba85bc61f] ...
	I0328 12:14:35.356100   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 179ba85bc61f"
	I0328 12:14:35.373174   18107 logs.go:123] Gathering logs for storage-provisioner [8de216c14235] ...
	I0328 12:14:35.373184   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de216c14235"
	I0328 12:14:35.384503   18107 logs.go:123] Gathering logs for container status ...
	I0328 12:14:35.384515   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 12:14:35.395920   18107 logs.go:123] Gathering logs for coredns [b4f8c8ad14f6] ...
	I0328 12:14:35.395931   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f8c8ad14f6"
	I0328 12:14:35.407038   18107 logs.go:123] Gathering logs for Docker ...
	I0328 12:14:35.407049   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 12:14:35.430611   18107 logs.go:123] Gathering logs for kubelet ...
	I0328 12:14:35.430618   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 12:14:35.469402   18107 logs.go:123] Gathering logs for dmesg ...
	I0328 12:14:35.469412   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 12:14:35.473863   18107 logs.go:123] Gathering logs for describe nodes ...
	I0328 12:14:35.473871   18107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 12:14:35.507491   18107 logs.go:123] Gathering logs for kube-apiserver [c9a5627e0ac7] ...
	I0328 12:14:35.507503   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9a5627e0ac7"
	I0328 12:14:35.521735   18107 logs.go:123] Gathering logs for coredns [aa76dd4fedd8] ...
	I0328 12:14:35.521746   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa76dd4fedd8"
	I0328 12:14:35.532685   18107 logs.go:123] Gathering logs for kube-scheduler [4d167dfc7911] ...
	I0328 12:14:35.532700   18107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d167dfc7911"
	I0328 12:14:38.049731   18107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0328 12:14:43.052038   18107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0328 12:14:43.058158   18107 out.go:177] 
	W0328 12:14:43.062300   18107 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0328 12:14:43.062333   18107 out.go:239] * 
	W0328 12:14:43.065095   18107 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:14:43.077160   18107 out.go:177] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-732000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (582.59s)
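
The stderr above shows the diagnostic pass minikube makes once the API server stops answering: it enumerates each control-plane container by name filter, then tails the last 400 lines of each. A minimal sketch of the same gathering loop, assuming shell access to the guest's Docker daemon (container names follow the k8s_<component> convention visible in the log):

    # Enumerate each control-plane container and tail its recent logs,
    # mirroring the logs.go gathering pass above.
    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter="name=k8s_${component}" --format='{{.ID}}'); do
        echo "== ${component} (${id}) =="
        docker logs --tail 400 "$id"
      done
    done

    # The failure itself is the health endpoint never answering; a manual
    # probe from the host (self-signed certs, hence -k) would look like:
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

Note that the gathering itself succeeds here: the component containers are found and tailed, yet the healthz probe keeps timing out, which is why the run exits with GUEST_START rather than a provisioning error.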

TestPause/serial/Start (10.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-615000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-615000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.974030125s)
-- stdout --
	* [pause-615000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-615000" primary control-plane node in "pause-615000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-615000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-615000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-615000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-615000 -n pause-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-615000 -n pause-615000: exit status 7 (66.011333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)
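
Every remaining failure in this report follows the same pattern: the qemu2 driver cannot reach the socket_vmnet socket, so the VM is never created. A few quick checks for that precondition, as a sketch; the socket path comes from the error above, while the Homebrew service name is an assumption about how socket_vmnet was installed:

    # The driver hands QEMU its network fd over this UNIX socket; it must
    # exist and have a listener before any qemu2 profile can start.
    ls -l /var/run/socket_vmnet

    # List open UNIX-domain sockets and look for the daemon holding it.
    sudo lsof -U | grep socket_vmnet

    # Hypothetical remediation for a Homebrew-managed install:
    sudo brew services restart socket_vmnet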

TestNoKubernetes/serial/StartWithK8s (9.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 : exit status 80 (9.875811s)
-- stdout --
	* [NoKubernetes-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-860000" primary control-plane node in "NoKubernetes-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000: exit status 7 (57.68925ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)
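
The post-mortem helper tolerates the non-zero status on purpose: `minikube status` encodes component state in its exit code, and a stopped host is the expected outcome after a failed start. Reproducing the check by hand (the command is taken verbatim from the helper output; reading 7 as a bitmask of host/cluster/kubernetes "not running" flags is an assumption about minikube's status codes):

    # Query only the host field; a stopped or nonexistent host yields a
    # non-zero exit code, hence the helper's "may be ok" note.
    out/minikube-darwin-arm64 status --format='{{.Host}}' -p NoKubernetes-860000 -n NoKubernetes-860000
    echo "status exit code: $?"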

TestNoKubernetes/serial/StartWithStopK8s (5.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 : exit status 80 (5.823919542s)
-- stdout --
	* [NoKubernetes-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-860000
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000: exit status 7 (63.443375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.89s)
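
The stderr above prints its own remediation: delete the stale profile, then retry. A sketch of that sequence, with the profile name and flags copied from the failing invocation; with /var/run/socket_vmnet still refusing connections, the retry would fail the same way until the service is restored:

    # Remove the half-created profile, then repeat the same start command.
    out/minikube-darwin-arm64 delete -p NoKubernetes-860000
    out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2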

TestNoKubernetes/serial/Start (5.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 : exit status 80 (5.82268625s)
-- stdout --
	* [NoKubernetes-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-860000
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000: exit status 7 (50.258916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.87s)

TestNoKubernetes/serial/StartNoArgs (5.9s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 : exit status 80 (5.842202291s)
-- stdout --
	* [NoKubernetes-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-860000
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-860000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-860000 -n NoKubernetes-860000: exit status 7 (55.943792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.90s)

TestNetworkPlugins/group/auto/Start (9.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.958304s)
-- stdout --
	* [auto-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-772000" primary control-plane node in "auto-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0328 12:13:12.598450   18489 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:13:12.598583   18489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:12.598586   18489 out.go:304] Setting ErrFile to fd 2...
	I0328 12:13:12.598588   18489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:12.598713   18489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:13:12.599842   18489 out.go:298] Setting JSON to false
	I0328 12:13:12.615692   18489 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11564,"bootTime":1711641628,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:13:12.615758   18489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:13:12.619981   18489 out.go:177] * [auto-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:13:12.627909   18489 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:13:12.631754   18489 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:13:12.627963   18489 notify.go:220] Checking for updates...
	I0328 12:13:12.639826   18489 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:13:12.647894   18489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:13:12.650919   18489 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:13:12.654843   18489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:13:12.658205   18489 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:13:12.658274   18489 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:13:12.658330   18489 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:13:12.661892   18489 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:13:12.668888   18489 start.go:297] selected driver: qemu2
	I0328 12:13:12.668894   18489 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:13:12.668900   18489 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:13:12.671216   18489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:13:12.675896   18489 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:13:12.678887   18489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:13:12.678926   18489 cni.go:84] Creating CNI manager for ""
	I0328 12:13:12.678933   18489 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:13:12.678937   18489 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:13:12.678971   18489 start.go:340] cluster config:
	{Name:auto-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:13:12.683737   18489 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:13:12.690876   18489 out.go:177] * Starting "auto-772000" primary control-plane node in "auto-772000" cluster
	I0328 12:13:12.694862   18489 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:13:12.694875   18489 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:13:12.694894   18489 cache.go:56] Caching tarball of preloaded images
	I0328 12:13:12.694959   18489 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:13:12.694964   18489 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:13:12.695023   18489 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/auto-772000/config.json ...
	I0328 12:13:12.695034   18489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/auto-772000/config.json: {Name:mkfac1c93b777b5c4f10b1f1cb70cde8af6276fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:13:12.695259   18489 start.go:360] acquireMachinesLock for auto-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:12.695293   18489 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "auto-772000"
	I0328 12:13:12.695306   18489 start.go:93] Provisioning new machine with config: &{Name:auto-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:12.695351   18489 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:12.702870   18489 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:12.721282   18489 start.go:159] libmachine.API.Create for "auto-772000" (driver="qemu2")
	I0328 12:13:12.721330   18489 client.go:168] LocalClient.Create starting
	I0328 12:13:12.721421   18489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:12.721453   18489 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:12.721467   18489 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:12.721520   18489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:12.721546   18489 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:12.721556   18489 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:12.721954   18489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:12.874119   18489 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:13.116708   18489 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:13.116719   18489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:13.116943   18489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:13.129952   18489 main.go:141] libmachine: STDOUT: 
	I0328 12:13:13.129979   18489 main.go:141] libmachine: STDERR: 
	I0328 12:13:13.130038   18489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2 +20000M
	I0328 12:13:13.140954   18489 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:13.140971   18489 main.go:141] libmachine: STDERR: 
	I0328 12:13:13.140986   18489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:13.140992   18489 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:13.141026   18489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:51:82:c4:10:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:13.142821   18489 main.go:141] libmachine: STDOUT: 
	I0328 12:13:13.142840   18489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:13.142864   18489 client.go:171] duration metric: took 421.518667ms to LocalClient.Create
	I0328 12:13:15.145105   18489 start.go:128] duration metric: took 2.44969075s to createHost
	I0328 12:13:15.145180   18489 start.go:83] releasing machines lock for "auto-772000", held for 2.449848084s
	W0328 12:13:15.145290   18489 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:15.152514   18489 out.go:177] * Deleting "auto-772000" in qemu2 ...
	W0328 12:13:15.182037   18489 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:15.182071   18489 start.go:728] Will try again in 5 seconds ...
	I0328 12:13:20.182771   18489 start.go:360] acquireMachinesLock for auto-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:20.183278   18489 start.go:364] duration metric: took 404.792µs to acquireMachinesLock for "auto-772000"
	I0328 12:13:20.183346   18489 start.go:93] Provisioning new machine with config: &{Name:auto-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:20.183617   18489 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:20.193308   18489 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:20.242899   18489 start.go:159] libmachine.API.Create for "auto-772000" (driver="qemu2")
	I0328 12:13:20.242952   18489 client.go:168] LocalClient.Create starting
	I0328 12:13:20.243078   18489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:20.243151   18489 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:20.243167   18489 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:20.243227   18489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:20.243271   18489 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:20.243284   18489 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:20.243838   18489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:20.402404   18489 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:20.460203   18489 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:20.460210   18489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:20.460400   18489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:20.473148   18489 main.go:141] libmachine: STDOUT: 
	I0328 12:13:20.473178   18489 main.go:141] libmachine: STDERR: 
	I0328 12:13:20.473233   18489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2 +20000M
	I0328 12:13:20.485110   18489 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:20.485129   18489 main.go:141] libmachine: STDERR: 
	I0328 12:13:20.485145   18489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:20.485151   18489 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:20.485192   18489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d1:9c:9d:d7:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000/disk.qcow2
	I0328 12:13:20.487114   18489 main.go:141] libmachine: STDOUT: 
	I0328 12:13:20.487141   18489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:20.487156   18489 client.go:171] duration metric: took 244.195708ms to LocalClient.Create
	I0328 12:13:22.489307   18489 start.go:128] duration metric: took 2.30563725s to createHost
	I0328 12:13:22.489346   18489 start.go:83] releasing machines lock for "auto-772000", held for 2.306018167s
	W0328 12:13:22.489590   18489 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:22.498927   18489 out.go:177] 
	W0328 12:13:22.505028   18489 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:13:22.505047   18489 out.go:239] * 
	W0328 12:13:22.506674   18489 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:13:22.516913   18489 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.96s)
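
The verbose stderr above is useful because it isolates the failure: both qemu-img steps complete with empty STDERR, and only the socket_vmnet_client-wrapped qemu-system-aarch64 launch fails. The disk preparation libmachine runs, copied from the log (the D variable is shorthand introduced here for readability):

    # Convert the raw boot disk to qcow2, then grow it to the requested
    # size; both commands succeeded in the run above.
    D=/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/auto-772000
    qemu-img convert -f raw -O qcow2 "$D/disk.qcow2.raw" "$D/disk.qcow2"
    qemu-img resize "$D/disk.qcow2" +20000M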

TestNetworkPlugins/group/kindnet/Start (9.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.89576025s)
-- stdout --
	* [kindnet-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-772000" primary control-plane node in "kindnet-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0328 12:13:24.905254   18605 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:13:24.905414   18605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:24.905417   18605 out.go:304] Setting ErrFile to fd 2...
	I0328 12:13:24.905419   18605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:24.905544   18605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:13:24.906601   18605 out.go:298] Setting JSON to false
	I0328 12:13:24.923015   18605 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11576,"bootTime":1711641628,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:13:24.923088   18605 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:13:24.930122   18605 out.go:177] * [kindnet-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:13:24.938034   18605 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:13:24.941973   18605 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:13:24.938062   18605 notify.go:220] Checking for updates...
	I0328 12:13:24.945007   18605 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:13:24.947952   18605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:13:24.950995   18605 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:13:24.957966   18605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:13:24.961399   18605 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:13:24.961476   18605 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:13:24.961517   18605 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:13:24.965737   18605 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:13:24.973977   18605 start.go:297] selected driver: qemu2
	I0328 12:13:24.973982   18605 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:13:24.973999   18605 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:13:24.976352   18605 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:13:24.978926   18605 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:13:24.983078   18605 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:13:24.983115   18605 cni.go:84] Creating CNI manager for "kindnet"
	I0328 12:13:24.983120   18605 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 12:13:24.983146   18605 start.go:340] cluster config:
	{Name:kindnet-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:13:24.987398   18605 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:13:24.996005   18605 out.go:177] * Starting "kindnet-772000" primary control-plane node in "kindnet-772000" cluster
	I0328 12:13:24.999949   18605 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:13:24.999961   18605 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:13:24.999969   18605 cache.go:56] Caching tarball of preloaded images
	I0328 12:13:25.000011   18605 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:13:25.000016   18605 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:13:25.000068   18605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kindnet-772000/config.json ...
	I0328 12:13:25.000077   18605 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kindnet-772000/config.json: {Name:mkd68070fb66704ea3b3f097e3f0d28c4004f7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:13:25.000276   18605 start.go:360] acquireMachinesLock for kindnet-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:25.000303   18605 start.go:364] duration metric: took 22.125µs to acquireMachinesLock for "kindnet-772000"
	I0328 12:13:25.000315   18605 start.go:93] Provisioning new machine with config: &{Name:kindnet-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:25.000339   18605 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:25.008025   18605 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:25.023310   18605 start.go:159] libmachine.API.Create for "kindnet-772000" (driver="qemu2")
	I0328 12:13:25.023346   18605 client.go:168] LocalClient.Create starting
	I0328 12:13:25.023411   18605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:25.023452   18605 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:25.023460   18605 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:25.023507   18605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:25.023528   18605 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:25.023536   18605 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:25.023886   18605 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:25.175107   18605 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:25.349042   18605 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:25.349052   18605 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:25.349261   18605 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:25.361850   18605 main.go:141] libmachine: STDOUT: 
	I0328 12:13:25.361879   18605 main.go:141] libmachine: STDERR: 
	I0328 12:13:25.361934   18605 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2 +20000M
	I0328 12:13:25.373146   18605 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:25.373174   18605 main.go:141] libmachine: STDERR: 
	I0328 12:13:25.373188   18605 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:25.373192   18605 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:25.373220   18605 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:71:43:55:ea:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:25.374979   18605 main.go:141] libmachine: STDOUT: 
	I0328 12:13:25.374998   18605 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:25.375019   18605 client.go:171] duration metric: took 351.661833ms to LocalClient.Create
	I0328 12:13:27.377278   18605 start.go:128] duration metric: took 2.376885667s to createHost
	I0328 12:13:27.377378   18605 start.go:83] releasing machines lock for "kindnet-772000", held for 2.377036s
	W0328 12:13:27.377459   18605 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:27.388528   18605 out.go:177] * Deleting "kindnet-772000" in qemu2 ...
	W0328 12:13:27.419528   18605 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:27.419562   18605 start.go:728] Will try again in 5 seconds ...
	I0328 12:13:32.421717   18605 start.go:360] acquireMachinesLock for kindnet-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:32.421816   18605 start.go:364] duration metric: took 77.375µs to acquireMachinesLock for "kindnet-772000"
	I0328 12:13:32.421834   18605 start.go:93] Provisioning new machine with config: &{Name:kindnet-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:32.421886   18605 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:32.425132   18605 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:32.440985   18605 start.go:159] libmachine.API.Create for "kindnet-772000" (driver="qemu2")
	I0328 12:13:32.441024   18605 client.go:168] LocalClient.Create starting
	I0328 12:13:32.441096   18605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:32.441135   18605 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:32.441143   18605 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:32.441200   18605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:32.441224   18605 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:32.441230   18605 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:32.441536   18605 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:32.592326   18605 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:32.700132   18605 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:32.700139   18605 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:32.700330   18605 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:32.712509   18605 main.go:141] libmachine: STDOUT: 
	I0328 12:13:32.712532   18605 main.go:141] libmachine: STDERR: 
	I0328 12:13:32.712590   18605 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2 +20000M
	I0328 12:13:32.723497   18605 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:32.723515   18605 main.go:141] libmachine: STDERR: 
	I0328 12:13:32.723526   18605 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:32.723541   18605 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:32.723574   18605 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e1:d4:8f:37:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kindnet-772000/disk.qcow2
	I0328 12:13:32.725338   18605 main.go:141] libmachine: STDOUT: 
	I0328 12:13:32.725359   18605 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:32.725375   18605 client.go:171] duration metric: took 284.342375ms to LocalClient.Create
	I0328 12:13:34.727636   18605 start.go:128] duration metric: took 2.305684125s to createHost
	I0328 12:13:34.727701   18605 start.go:83] releasing machines lock for "kindnet-772000", held for 2.305845333s
	W0328 12:13:34.728098   18605 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:34.736534   18605 out.go:177] 
	W0328 12:13:34.744542   18605 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:13:34.744598   18605 out.go:239] * 
	* 
	W0328 12:13:34.747310   18605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:13:34.756501   18605 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.90s)
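
Editor's note: both VM-create attempts above die at the same step: the socket_vmnet_client wrapper cannot reach the host-side daemon (Connection refused on /var/run/socket_vmnet), so the kindnet CNI under test is never exercised. A minimal triage sketch for the macOS agent, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (illustrative commands, not part of the recorded run):

	# Is anything serving the UNIX socket the logs point at?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# If the daemon is down and Homebrew manages it, restart the service:
	sudo brew services restart socket_vmnet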

TestNetworkPlugins/group/calico/Start (9.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.871781833s)

-- stdout --
	* [calico-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-772000" primary control-plane node in "calico-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:13:37.187747   18719 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:13:37.187879   18719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:37.187882   18719 out.go:304] Setting ErrFile to fd 2...
	I0328 12:13:37.187884   18719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:37.187999   18719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:13:37.189065   18719 out.go:298] Setting JSON to false
	I0328 12:13:37.205651   18719 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11589,"bootTime":1711641628,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:13:37.205740   18719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:13:37.211539   18719 out.go:177] * [calico-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:13:37.219537   18719 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:13:37.224521   18719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:13:37.219577   18719 notify.go:220] Checking for updates...
	I0328 12:13:37.231183   18719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:13:37.236079   18719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:13:37.239496   18719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:13:37.242513   18719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:13:37.245899   18719 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:13:37.245972   18719 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:13:37.246036   18719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:13:37.250483   18719 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:13:37.257495   18719 start.go:297] selected driver: qemu2
	I0328 12:13:37.257502   18719 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:13:37.257510   18719 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:13:37.259820   18719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:13:37.262450   18719 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:13:37.265532   18719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:13:37.265564   18719 cni.go:84] Creating CNI manager for "calico"
	I0328 12:13:37.265570   18719 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0328 12:13:37.265611   18719 start.go:340] cluster config:
	{Name:calico-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:13:37.269771   18719 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:13:37.277489   18719 out.go:177] * Starting "calico-772000" primary control-plane node in "calico-772000" cluster
	I0328 12:13:37.281516   18719 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:13:37.281532   18719 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:13:37.281545   18719 cache.go:56] Caching tarball of preloaded images
	I0328 12:13:37.281610   18719 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:13:37.281616   18719 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:13:37.281682   18719 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/calico-772000/config.json ...
	I0328 12:13:37.281693   18719 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/calico-772000/config.json: {Name:mke7c88456eab31220583384db409954c3266391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:13:37.281894   18719 start.go:360] acquireMachinesLock for calico-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:37.281923   18719 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "calico-772000"
	I0328 12:13:37.281935   18719 start.go:93] Provisioning new machine with config: &{Name:calico-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:37.281969   18719 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:37.289464   18719 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:37.305062   18719 start.go:159] libmachine.API.Create for "calico-772000" (driver="qemu2")
	I0328 12:13:37.305086   18719 client.go:168] LocalClient.Create starting
	I0328 12:13:37.305146   18719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:37.305174   18719 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:37.305186   18719 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:37.305232   18719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:37.305253   18719 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:37.305259   18719 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:37.305631   18719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:37.455676   18719 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:37.571383   18719 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:37.571390   18719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:37.571561   18719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:37.584109   18719 main.go:141] libmachine: STDOUT: 
	I0328 12:13:37.584136   18719 main.go:141] libmachine: STDERR: 
	I0328 12:13:37.584187   18719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2 +20000M
	I0328 12:13:37.595307   18719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:37.595326   18719 main.go:141] libmachine: STDERR: 
	I0328 12:13:37.595346   18719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:37.595350   18719 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:37.595375   18719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:60:2d:e9:87:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:37.597133   18719 main.go:141] libmachine: STDOUT: 
	I0328 12:13:37.597148   18719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:37.597166   18719 client.go:171] duration metric: took 292.07125ms to LocalClient.Create
	I0328 12:13:39.599503   18719 start.go:128] duration metric: took 2.317470958s to createHost
	I0328 12:13:39.599598   18719 start.go:83] releasing machines lock for "calico-772000", held for 2.317637667s
	W0328 12:13:39.599654   18719 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:39.615777   18719 out.go:177] * Deleting "calico-772000" in qemu2 ...
	W0328 12:13:39.644795   18719 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:39.644835   18719 start.go:728] Will try again in 5 seconds ...
	I0328 12:13:44.647082   18719 start.go:360] acquireMachinesLock for calico-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:44.647480   18719 start.go:364] duration metric: took 314.458µs to acquireMachinesLock for "calico-772000"
	I0328 12:13:44.647539   18719 start.go:93] Provisioning new machine with config: &{Name:calico-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:44.647746   18719 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:44.655810   18719 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:44.698481   18719 start.go:159] libmachine.API.Create for "calico-772000" (driver="qemu2")
	I0328 12:13:44.698529   18719 client.go:168] LocalClient.Create starting
	I0328 12:13:44.698625   18719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:44.698680   18719 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:44.698698   18719 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:44.698760   18719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:44.698815   18719 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:44.698824   18719 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:44.699445   18719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:44.859194   18719 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:44.959099   18719 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:44.959108   18719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:44.959277   18719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:44.971453   18719 main.go:141] libmachine: STDOUT: 
	I0328 12:13:44.971475   18719 main.go:141] libmachine: STDERR: 
	I0328 12:13:44.971545   18719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2 +20000M
	I0328 12:13:44.982555   18719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:44.982572   18719 main.go:141] libmachine: STDERR: 
	I0328 12:13:44.982590   18719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:44.982594   18719 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:44.982638   18719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:22:5b:43:25:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/calico-772000/disk.qcow2
	I0328 12:13:44.984442   18719 main.go:141] libmachine: STDOUT: 
	I0328 12:13:44.984456   18719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:44.984471   18719 client.go:171] duration metric: took 285.933584ms to LocalClient.Create
	I0328 12:13:46.986713   18719 start.go:128] duration metric: took 2.338906958s to createHost
	I0328 12:13:46.986824   18719 start.go:83] releasing machines lock for "calico-772000", held for 2.3392965s
	W0328 12:13:46.987235   18719 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:46.996808   18719 out.go:177] 
	W0328 12:13:47.003077   18719 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:13:47.003122   18719 out.go:239] * 
	* 
	W0328 12:13:47.005383   18719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:13:47.014963   18719 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.87s)
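
Editor's note: the calico run fails identically, before any CNI-specific code runs. The socket path can be smoke-tested outside the harness by invoking the same wrapper shown in the logs with a no-op in place of qemu-system-aarch64; a sketch reusing the paths from the log above:

	# socket_vmnet_client connects to the daemon socket and then execs the
	# given command with the vmnet file descriptor attached; if the connect
	# is refused here, every qemu launch in this report fails the same way.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "exit status: $?"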

TestNetworkPlugins/group/custom-flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.976309917s)

-- stdout --
	* [custom-flannel-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-772000" primary control-plane node in "custom-flannel-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:13:49.571025   18837 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:13:49.571161   18837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:49.571164   18837 out.go:304] Setting ErrFile to fd 2...
	I0328 12:13:49.571167   18837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:13:49.571305   18837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:13:49.572387   18837 out.go:298] Setting JSON to false
	I0328 12:13:49.589002   18837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11601,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:13:49.589067   18837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:13:49.594274   18837 out.go:177] * [custom-flannel-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:13:49.606229   18837 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:13:49.601225   18837 notify.go:220] Checking for updates...
	I0328 12:13:49.614234   18837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:13:49.622208   18837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:13:49.625264   18837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:13:49.629212   18837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:13:49.637199   18837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:13:49.641562   18837 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:13:49.641635   18837 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:13:49.641683   18837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:13:49.646187   18837 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:13:49.652230   18837 start.go:297] selected driver: qemu2
	I0328 12:13:49.652235   18837 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:13:49.652243   18837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:13:49.654583   18837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:13:49.658215   18837 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:13:49.661277   18837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:13:49.661320   18837 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0328 12:13:49.661330   18837 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0328 12:13:49.661371   18837 start.go:340] cluster config:
	{Name:custom-flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:13:49.666058   18837 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:13:49.670259   18837 out.go:177] * Starting "custom-flannel-772000" primary control-plane node in "custom-flannel-772000" cluster
	I0328 12:13:49.674198   18837 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:13:49.674225   18837 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:13:49.674239   18837 cache.go:56] Caching tarball of preloaded images
	I0328 12:13:49.674295   18837 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:13:49.674301   18837 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:13:49.674382   18837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/custom-flannel-772000/config.json ...
	I0328 12:13:49.674394   18837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/custom-flannel-772000/config.json: {Name:mk9eb6b737407973de505fcdc8e82887314b8681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:13:49.674605   18837 start.go:360] acquireMachinesLock for custom-flannel-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:49.674642   18837 start.go:364] duration metric: took 28.291µs to acquireMachinesLock for "custom-flannel-772000"
	I0328 12:13:49.674662   18837 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:49.674704   18837 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:49.683210   18837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:49.700293   18837 start.go:159] libmachine.API.Create for "custom-flannel-772000" (driver="qemu2")
	I0328 12:13:49.700319   18837 client.go:168] LocalClient.Create starting
	I0328 12:13:49.700382   18837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:49.700413   18837 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:49.700423   18837 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:49.700468   18837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:49.700490   18837 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:49.700498   18837 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:49.700894   18837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:49.850357   18837 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:49.957979   18837 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:49.957987   18837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:49.958168   18837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:49.970552   18837 main.go:141] libmachine: STDOUT: 
	I0328 12:13:49.970574   18837 main.go:141] libmachine: STDERR: 
	I0328 12:13:49.970642   18837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2 +20000M
	I0328 12:13:49.981573   18837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:49.981590   18837 main.go:141] libmachine: STDERR: 
	I0328 12:13:49.981610   18837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:49.981617   18837 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:49.981668   18837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:45:d1:59:6f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:49.983539   18837 main.go:141] libmachine: STDOUT: 
	I0328 12:13:49.983555   18837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:49.983575   18837 client.go:171] duration metric: took 283.247917ms to LocalClient.Create
	I0328 12:13:51.985876   18837 start.go:128] duration metric: took 2.311117416s to createHost
	I0328 12:13:51.985977   18837 start.go:83] releasing machines lock for "custom-flannel-772000", held for 2.311296667s
	W0328 12:13:51.986121   18837 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:51.993390   18837 out.go:177] * Deleting "custom-flannel-772000" in qemu2 ...
	W0328 12:13:52.027855   18837 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:52.027895   18837 start.go:728] Will try again in 5 seconds ...
	I0328 12:13:57.030128   18837 start.go:360] acquireMachinesLock for custom-flannel-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:13:57.030601   18837 start.go:364] duration metric: took 377.625µs to acquireMachinesLock for "custom-flannel-772000"
	I0328 12:13:57.030721   18837 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:13:57.031102   18837 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:13:57.036767   18837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:13:57.086309   18837 start.go:159] libmachine.API.Create for "custom-flannel-772000" (driver="qemu2")
	I0328 12:13:57.086367   18837 client.go:168] LocalClient.Create starting
	I0328 12:13:57.086470   18837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:13:57.086537   18837 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:57.086555   18837 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:57.086614   18837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:13:57.086667   18837 main.go:141] libmachine: Decoding PEM data...
	I0328 12:13:57.086680   18837 main.go:141] libmachine: Parsing certificate...
	I0328 12:13:57.087244   18837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:13:57.248422   18837 main.go:141] libmachine: Creating SSH key...
	I0328 12:13:57.443225   18837 main.go:141] libmachine: Creating Disk image...
	I0328 12:13:57.443233   18837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:13:57.443438   18837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:57.455967   18837 main.go:141] libmachine: STDOUT: 
	I0328 12:13:57.455996   18837 main.go:141] libmachine: STDERR: 
	I0328 12:13:57.456057   18837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2 +20000M
	I0328 12:13:57.466900   18837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:13:57.466924   18837 main.go:141] libmachine: STDERR: 
	I0328 12:13:57.466939   18837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:57.466943   18837 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:13:57.466988   18837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:28:42:5f:ad:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/custom-flannel-772000/disk.qcow2
	I0328 12:13:57.468856   18837 main.go:141] libmachine: STDOUT: 
	I0328 12:13:57.468880   18837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:13:57.468896   18837 client.go:171] duration metric: took 382.519708ms to LocalClient.Create
	I0328 12:13:59.471135   18837 start.go:128] duration metric: took 2.439960791s to createHost
	I0328 12:13:59.471221   18837 start.go:83] releasing machines lock for "custom-flannel-772000", held for 2.440564333s
	W0328 12:13:59.471619   18837 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:13:59.487482   18837 out.go:177] 
	W0328 12:13:59.490409   18837 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:13:59.490475   18837 out.go:239] * 
	* 
	W0328 12:13:59.492803   18837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:13:59.502370   18837 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.98s)
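Note: this start, like the other network-plugin starts in this run, never reaches Kubernetes. Both VM creation attempts die when socket_vmnet_client cannot reach the host networking daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), which points to the socket_vmnet daemon not running on the CI host rather than anything in the test itself. A minimal host-side triage sketch, assuming the install layout implied by the paths in the log above (the gateway address is an illustrative choice, not taken from this report):

	# Does the unix socket exist at the path the tests expect?
	ls -l /var/run/socket_vmnet
	# Start the daemon by hand (per the socket_vmnet README; vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Then retry the same profile:
	out/minikube-darwin-arm64 start -p custom-flannel-772000 --driver=qemu2 --network=socket_vmnet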

TestNetworkPlugins/group/false/Start (10.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.253797208s)

-- stdout --
	* [false-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-772000" primary control-plane node in "false-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:14:02.002307   18959 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:14:02.002434   18959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:02.002437   18959 out.go:304] Setting ErrFile to fd 2...
	I0328 12:14:02.002439   18959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:02.002563   18959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:14:02.003615   18959 out.go:298] Setting JSON to false
	I0328 12:14:02.019834   18959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11614,"bootTime":1711641628,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:14:02.019916   18959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:14:02.026231   18959 out.go:177] * [false-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:14:02.033174   18959 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:14:02.038216   18959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:14:02.033278   18959 notify.go:220] Checking for updates...
	I0328 12:14:02.045887   18959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:14:02.050941   18959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:14:02.054312   18959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:14:02.057170   18959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:14:02.060612   18959 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:14:02.060683   18959 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:14:02.060748   18959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:14:02.064255   18959 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:14:02.071183   18959 start.go:297] selected driver: qemu2
	I0328 12:14:02.071187   18959 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:14:02.071193   18959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:14:02.073441   18959 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:14:02.078203   18959 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:14:02.081215   18959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:14:02.081246   18959 cni.go:84] Creating CNI manager for "false"
	I0328 12:14:02.081280   18959 start.go:340] cluster config:
	{Name:false-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:14:02.085633   18959 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:14:02.094042   18959 out.go:177] * Starting "false-772000" primary control-plane node in "false-772000" cluster
	I0328 12:14:02.098175   18959 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:14:02.098187   18959 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:14:02.098194   18959 cache.go:56] Caching tarball of preloaded images
	I0328 12:14:02.098238   18959 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:14:02.098243   18959 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:14:02.098300   18959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/false-772000/config.json ...
	I0328 12:14:02.098312   18959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/false-772000/config.json: {Name:mkda56aa66c78a1edc8426d742e9b0c5c87813f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:14:02.098508   18959 start.go:360] acquireMachinesLock for false-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:02.098536   18959 start.go:364] duration metric: took 22.833µs to acquireMachinesLock for "false-772000"
	I0328 12:14:02.098547   18959 start.go:93] Provisioning new machine with config: &{Name:false-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:02.098577   18959 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:02.107190   18959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:02.121987   18959 start.go:159] libmachine.API.Create for "false-772000" (driver="qemu2")
	I0328 12:14:02.122015   18959 client.go:168] LocalClient.Create starting
	I0328 12:14:02.122077   18959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:02.122105   18959 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:02.122119   18959 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:02.122164   18959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:02.122185   18959 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:02.122192   18959 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:02.122543   18959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:02.273105   18959 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:02.772914   18959 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:02.772925   18959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:02.773100   18959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:02.785629   18959 main.go:141] libmachine: STDOUT: 
	I0328 12:14:02.785652   18959 main.go:141] libmachine: STDERR: 
	I0328 12:14:02.785722   18959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2 +20000M
	I0328 12:14:02.796621   18959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:02.796638   18959 main.go:141] libmachine: STDERR: 
	I0328 12:14:02.796653   18959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:02.796657   18959 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:02.796698   18959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4d:71:de:db:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:02.798413   18959 main.go:141] libmachine: STDOUT: 
	I0328 12:14:02.798431   18959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:02.798453   18959 client.go:171] duration metric: took 676.425708ms to LocalClient.Create
	I0328 12:14:04.800602   18959 start.go:128] duration metric: took 2.701979917s to createHost
	I0328 12:14:04.800651   18959 start.go:83] releasing machines lock for "false-772000", held for 2.702068458s
	W0328 12:14:04.800713   18959 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:04.806674   18959 out.go:177] * Deleting "false-772000" in qemu2 ...
	W0328 12:14:04.832181   18959 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:04.832197   18959 start.go:728] Will try again in 5 seconds ...
	I0328 12:14:09.834375   18959 start.go:360] acquireMachinesLock for false-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:09.834852   18959 start.go:364] duration metric: took 382.583µs to acquireMachinesLock for "false-772000"
	I0328 12:14:09.835032   18959 start.go:93] Provisioning new machine with config: &{Name:false-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:09.835400   18959 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:09.844783   18959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:09.892612   18959 start.go:159] libmachine.API.Create for "false-772000" (driver="qemu2")
	I0328 12:14:09.892657   18959 client.go:168] LocalClient.Create starting
	I0328 12:14:09.892771   18959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:09.892841   18959 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:09.892857   18959 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:09.892922   18959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:09.892968   18959 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:09.892981   18959 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:09.893490   18959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:10.056735   18959 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:10.159942   18959 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:10.159948   18959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:10.160118   18959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:10.172442   18959 main.go:141] libmachine: STDOUT: 
	I0328 12:14:10.172461   18959 main.go:141] libmachine: STDERR: 
	I0328 12:14:10.172540   18959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2 +20000M
	I0328 12:14:10.183196   18959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:10.183221   18959 main.go:141] libmachine: STDERR: 
	I0328 12:14:10.183237   18959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:10.183242   18959 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:10.183276   18959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:6b:7d:04:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/false-772000/disk.qcow2
	I0328 12:14:10.185002   18959 main.go:141] libmachine: STDOUT: 
	I0328 12:14:10.185021   18959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:10.185039   18959 client.go:171] duration metric: took 292.371709ms to LocalClient.Create
	I0328 12:14:12.187247   18959 start.go:128] duration metric: took 2.351768167s to createHost
	I0328 12:14:12.187290   18959 start.go:83] releasing machines lock for "false-772000", held for 2.352357791s
	W0328 12:14:12.187400   18959 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:12.200661   18959 out.go:177] 
	W0328 12:14:12.204728   18959 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:14:12.204740   18959 out.go:239] * 
	* 
	W0328 12:14:12.205663   18959 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:14:12.216518   18959 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.25s)
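Note: the QEMU invocations above also show how minikube wires the VM into socket_vmnet: socket_vmnet_client connects to the unix socket first and hands the connected descriptor to qemu-system-aarch64 as fd 3 (-netdev socket,id=net0,fd=3), so when that connect is refused QEMU never launches at all. Two hedged follow-ups, assuming macOS's BSD netcat is available and that this minikube build supports the documented daemon-less QEMU network mode (service/tunnel-dependent tests would still fail under it):

	# Probe the daemon socket directly; prints "Connection refused" when nothing is listening:
	nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet is up"
	# Daemon-less comparison run using QEMU user-mode networking:
	out/minikube-darwin-arm64 start -p false-772000 --driver=qemu2 --network=builtin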

TestNetworkPlugins/group/enable-default-cni/Start (9.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.955900125s)

-- stdout --
	* [enable-default-cni-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-772000" primary control-plane node in "enable-default-cni-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:14:14.509073   19075 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:14:14.509197   19075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:14.509200   19075 out.go:304] Setting ErrFile to fd 2...
	I0328 12:14:14.509203   19075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:14.509321   19075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:14:14.510391   19075 out.go:298] Setting JSON to false
	I0328 12:14:14.526591   19075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11626,"bootTime":1711641628,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:14:14.526679   19075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:14:14.533062   19075 out.go:177] * [enable-default-cni-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:14:14.538981   19075 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:14:14.543013   19075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:14:14.539054   19075 notify.go:220] Checking for updates...
	I0328 12:14:14.553049   19075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:14:14.556982   19075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:14:14.559954   19075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:14:14.562976   19075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:14:14.566434   19075 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:14:14.566509   19075 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:14:14.566548   19075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:14:14.569911   19075 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:14:14.576984   19075 start.go:297] selected driver: qemu2
	I0328 12:14:14.576989   19075 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:14:14.576996   19075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:14:14.579242   19075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:14:14.582975   19075 out.go:177] * Automatically selected the socket_vmnet network
	E0328 12:14:14.586028   19075 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0328 12:14:14.586044   19075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:14:14.586078   19075 cni.go:84] Creating CNI manager for "bridge"
	I0328 12:14:14.586082   19075 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:14:14.586109   19075 start.go:340] cluster config:
	{Name:enable-default-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:14:14.590459   19075 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:14:14.594944   19075 out.go:177] * Starting "enable-default-cni-772000" primary control-plane node in "enable-default-cni-772000" cluster
	I0328 12:14:14.603029   19075 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:14:14.603045   19075 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:14:14.603058   19075 cache.go:56] Caching tarball of preloaded images
	I0328 12:14:14.603125   19075 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:14:14.603131   19075 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:14:14.603207   19075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/enable-default-cni-772000/config.json ...
	I0328 12:14:14.603219   19075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/enable-default-cni-772000/config.json: {Name:mk2b7a4a66885d0595854345bad70ab552befbb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:14:14.603426   19075 start.go:360] acquireMachinesLock for enable-default-cni-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:14.603459   19075 start.go:364] duration metric: took 22.209µs to acquireMachinesLock for "enable-default-cni-772000"
	I0328 12:14:14.603471   19075 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:14.603495   19075 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:14.611993   19075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:14.626536   19075 start.go:159] libmachine.API.Create for "enable-default-cni-772000" (driver="qemu2")
	I0328 12:14:14.626565   19075 client.go:168] LocalClient.Create starting
	I0328 12:14:14.626635   19075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:14.626662   19075 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:14.626676   19075 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:14.626719   19075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:14.626746   19075 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:14.626754   19075 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:14.627118   19075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:14.832365   19075 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:14.952090   19075 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:14.952099   19075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:14.952268   19075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:14.965160   19075 main.go:141] libmachine: STDOUT: 
	I0328 12:14:14.965186   19075 main.go:141] libmachine: STDERR: 
	I0328 12:14:14.965245   19075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2 +20000M
	I0328 12:14:14.976233   19075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:14.976248   19075 main.go:141] libmachine: STDERR: 
	I0328 12:14:14.976269   19075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:14.976273   19075 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:14.976302   19075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b7:25:5d:b7:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:14.978113   19075 main.go:141] libmachine: STDOUT: 
	I0328 12:14:14.978138   19075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:14.978156   19075 client.go:171] duration metric: took 351.581084ms to LocalClient.Create
	I0328 12:14:16.980380   19075 start.go:128] duration metric: took 2.376833875s to createHost
	I0328 12:14:16.980426   19075 start.go:83] releasing machines lock for "enable-default-cni-772000", held for 2.376932s
	W0328 12:14:16.980479   19075 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:16.986822   19075 out.go:177] * Deleting "enable-default-cni-772000" in qemu2 ...
	W0328 12:14:17.017017   19075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:17.017035   19075 start.go:728] Will try again in 5 seconds ...
	I0328 12:14:22.019355   19075 start.go:360] acquireMachinesLock for enable-default-cni-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:22.019891   19075 start.go:364] duration metric: took 411.375µs to acquireMachinesLock for "enable-default-cni-772000"
	I0328 12:14:22.020037   19075 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:22.020272   19075 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:22.031997   19075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:22.080090   19075 start.go:159] libmachine.API.Create for "enable-default-cni-772000" (driver="qemu2")
	I0328 12:14:22.080139   19075 client.go:168] LocalClient.Create starting
	I0328 12:14:22.080248   19075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:22.080323   19075 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:22.080369   19075 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:22.080432   19075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:22.080477   19075 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:22.080489   19075 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:22.081009   19075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:22.242446   19075 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:22.358075   19075 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:22.358083   19075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:22.358274   19075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:22.370552   19075 main.go:141] libmachine: STDOUT: 
	I0328 12:14:22.370572   19075 main.go:141] libmachine: STDERR: 
	I0328 12:14:22.370625   19075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2 +20000M
	I0328 12:14:22.382112   19075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:22.382132   19075 main.go:141] libmachine: STDERR: 
	I0328 12:14:22.382144   19075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:22.382150   19075 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:22.382181   19075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:33:56:85:71:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/enable-default-cni-772000/disk.qcow2
	I0328 12:14:22.384046   19075 main.go:141] libmachine: STDOUT: 
	I0328 12:14:22.384068   19075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:22.384081   19075 client.go:171] duration metric: took 303.933167ms to LocalClient.Create
	I0328 12:14:24.386377   19075 start.go:128] duration metric: took 2.366027333s to createHost
	I0328 12:14:24.386452   19075 start.go:83] releasing machines lock for "enable-default-cni-772000", held for 2.366509333s
	W0328 12:14:24.386765   19075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:24.402435   19075 out.go:177] 
	W0328 12:14:24.406528   19075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:14:24.406555   19075 out.go:239] * 
	* 
	W0328 12:14:24.409228   19075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:14:24.421435   19075 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.96s)
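
Every failure in this group has the same proximate cause: the socket_vmnet_client command shown in the logs cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never started and minikube aborts with GUEST_PROVISION. The check can be reproduced outside minikube with a minimal probe; the sketch below is illustrative only (the socket path comes from the SocketVMnetPath in the config dumps above, and the program is not part of the minikube tree):

    // probe.go - hedged sketch: dial the socket the way socket_vmnet_client
    // would, to separate "daemon down on the CI host" from "minikube bug".
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dumps above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With no daemon accepting on the socket this prints
            // "connect: connection refused", matching the STDERR lines above.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

"Connection refused" on a unix socket typically means the socket file exists but nothing is accepting on it, which points at the socket_vmnet daemon on the CI host rather than at the individual tests.
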
TestNetworkPlugins/group/flannel/Start (10s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.001278458s)

-- stdout --
	* [flannel-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-772000" primary control-plane node in "flannel-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0328 12:14:26.738315   19191 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:14:26.738437   19191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:26.738440   19191 out.go:304] Setting ErrFile to fd 2...
	I0328 12:14:26.738442   19191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:26.738590   19191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:14:26.739733   19191 out.go:298] Setting JSON to false
	I0328 12:14:26.756369   19191 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11638,"bootTime":1711641628,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:14:26.756465   19191 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:14:26.762742   19191 out.go:177] * [flannel-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:14:26.766645   19191 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:14:26.766679   19191 notify.go:220] Checking for updates...
	I0328 12:14:26.775397   19191 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:14:26.780408   19191 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:14:26.784722   19191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:14:26.787684   19191 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:14:26.790697   19191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:14:26.794079   19191 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:14:26.794145   19191 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:14:26.794189   19191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:14:26.797712   19191 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:14:26.804683   19191 start.go:297] selected driver: qemu2
	I0328 12:14:26.804690   19191 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:14:26.804698   19191 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:14:26.807061   19191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:14:26.811658   19191 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:14:26.814783   19191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:14:26.814817   19191 cni.go:84] Creating CNI manager for "flannel"
	I0328 12:14:26.814824   19191 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0328 12:14:26.814858   19191 start.go:340] cluster config:
	{Name:flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:14:26.819326   19191 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:14:26.827688   19191 out.go:177] * Starting "flannel-772000" primary control-plane node in "flannel-772000" cluster
	I0328 12:14:26.831560   19191 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:14:26.831578   19191 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:14:26.831594   19191 cache.go:56] Caching tarball of preloaded images
	I0328 12:14:26.831681   19191 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:14:26.831688   19191 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:14:26.831782   19191 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/flannel-772000/config.json ...
	I0328 12:14:26.831795   19191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/flannel-772000/config.json: {Name:mk70e5f074e4be6df1e3b46d86619d76e362afe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:14:26.832116   19191 start.go:360] acquireMachinesLock for flannel-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:26.832151   19191 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "flannel-772000"
	I0328 12:14:26.832163   19191 start.go:93] Provisioning new machine with config: &{Name:flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:26.832194   19191 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:26.836704   19191 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:26.853266   19191 start.go:159] libmachine.API.Create for "flannel-772000" (driver="qemu2")
	I0328 12:14:26.853290   19191 client.go:168] LocalClient.Create starting
	I0328 12:14:26.853347   19191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:26.853375   19191 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:26.853386   19191 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:26.853431   19191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:26.853453   19191 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:26.853461   19191 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:26.853860   19191 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:27.007879   19191 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:27.214761   19191 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:27.214771   19191 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:27.214949   19191 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:27.227818   19191 main.go:141] libmachine: STDOUT: 
	I0328 12:14:27.227835   19191 main.go:141] libmachine: STDERR: 
	I0328 12:14:27.227885   19191 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2 +20000M
	I0328 12:14:27.238928   19191 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:27.238945   19191 main.go:141] libmachine: STDERR: 
	I0328 12:14:27.238963   19191 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:27.238968   19191 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:27.239007   19191 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:a8:37:6e:1a:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:27.240870   19191 main.go:141] libmachine: STDOUT: 
	I0328 12:14:27.240890   19191 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:27.240909   19191 client.go:171] duration metric: took 387.608792ms to LocalClient.Create
	I0328 12:14:29.243189   19191 start.go:128] duration metric: took 2.410931584s to createHost
	I0328 12:14:29.243259   19191 start.go:83] releasing machines lock for "flannel-772000", held for 2.411071s
	W0328 12:14:29.243328   19191 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:29.260664   19191 out.go:177] * Deleting "flannel-772000" in qemu2 ...
	W0328 12:14:29.288829   19191 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:29.288861   19191 start.go:728] Will try again in 5 seconds ...
	I0328 12:14:34.291102   19191 start.go:360] acquireMachinesLock for flannel-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:34.291598   19191 start.go:364] duration metric: took 402.125µs to acquireMachinesLock for "flannel-772000"
	I0328 12:14:34.291768   19191 start.go:93] Provisioning new machine with config: &{Name:flannel-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:34.292043   19191 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:34.297830   19191 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:34.343046   19191 start.go:159] libmachine.API.Create for "flannel-772000" (driver="qemu2")
	I0328 12:14:34.343100   19191 client.go:168] LocalClient.Create starting
	I0328 12:14:34.343219   19191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:34.343288   19191 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:34.343307   19191 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:34.343384   19191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:34.343427   19191 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:34.343436   19191 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:34.343940   19191 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:34.503189   19191 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:34.646555   19191 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:34.646567   19191 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:34.646767   19191 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:34.660616   19191 main.go:141] libmachine: STDOUT: 
	I0328 12:14:34.660641   19191 main.go:141] libmachine: STDERR: 
	I0328 12:14:34.660716   19191 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2 +20000M
	I0328 12:14:34.672301   19191 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:34.672322   19191 main.go:141] libmachine: STDERR: 
	I0328 12:14:34.672337   19191 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:34.672341   19191 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:34.672383   19191 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:57:dc:f6:a0:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/flannel-772000/disk.qcow2
	I0328 12:14:34.674260   19191 main.go:141] libmachine: STDOUT: 
	I0328 12:14:34.674279   19191 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:34.674291   19191 client.go:171] duration metric: took 331.181667ms to LocalClient.Create
	I0328 12:14:36.676496   19191 start.go:128] duration metric: took 2.38438825s to createHost
	I0328 12:14:36.676559   19191 start.go:83] releasing machines lock for "flannel-772000", held for 2.38490925s
	W0328 12:14:36.676888   19191 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:36.683116   19191 out.go:177] 
	W0328 12:14:36.686656   19191 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:14:36.686694   19191 out.go:239] * 
	* 
	W0328 12:14:36.688174   19191 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:14:36.699544   19191 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.00s)
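
The stderr above also shows the shape of minikube's recovery path: the first createHost attempt fails after about 2.4s, the half-created profile is deleted, and exactly one retry runs five seconds later before the start is abandoned. A rough sketch of that control flow, assuming a simplified createHost that always fails the way these hosts do (the helpers here are illustrative, not minikube's actual start.go):

    // retry.go - hedged sketch of the delete-then-retry-once flow in the log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for minikube's createHost step; here it fails the
    // same way every attempt in this report does.
    func createHost(profile string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    // startHost mirrors the observed flow: fail, delete the profile, warn,
    // wait five seconds, then make one final attempt.
    func startHost(profile string) error {
        if err := createHost(profile); err == nil {
            return nil
        }
        fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
        fmt.Println("! StartHost failed, but will try again")
        time.Sleep(5 * time.Second)
        return createHost(profile)
    }

    func main() {
        if err := startHost("flannel-772000"); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }

Because the daemon never becomes reachable, both attempts fail in roughly 2.4s each; with the 5s pause in between, every test in this group exits in about 10 seconds instead of running into its 15-minute --wait-timeout.
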
TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.78170925s)

-- stdout --
	* [bridge-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-772000" primary control-plane node in "bridge-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0328 12:14:39.188114   19309 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:14:39.188239   19309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:39.188243   19309 out.go:304] Setting ErrFile to fd 2...
	I0328 12:14:39.188245   19309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:39.188383   19309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:14:39.189467   19309 out.go:298] Setting JSON to false
	I0328 12:14:39.205614   19309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11651,"bootTime":1711641628,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:14:39.205668   19309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:14:39.213020   19309 out.go:177] * [bridge-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:14:39.220953   19309 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:14:39.224937   19309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:14:39.221038   19309 notify.go:220] Checking for updates...
	I0328 12:14:39.230915   19309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:14:39.233887   19309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:14:39.235014   19309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:14:39.241957   19309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:14:39.245340   19309 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:14:39.245403   19309 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:14:39.245452   19309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:14:39.249872   19309 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:14:39.256971   19309 start.go:297] selected driver: qemu2
	I0328 12:14:39.256976   19309 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:14:39.256981   19309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:14:39.259126   19309 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:14:39.262955   19309 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:14:39.265956   19309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:14:39.265991   19309 cni.go:84] Creating CNI manager for "bridge"
	I0328 12:14:39.265995   19309 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:14:39.266023   19309 start.go:340] cluster config:
	{Name:bridge-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:14:39.270251   19309 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:14:39.280864   19309 out.go:177] * Starting "bridge-772000" primary control-plane node in "bridge-772000" cluster
	I0328 12:14:39.284958   19309 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:14:39.284975   19309 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:14:39.284988   19309 cache.go:56] Caching tarball of preloaded images
	I0328 12:14:39.285043   19309 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:14:39.285049   19309 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:14:39.285129   19309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/bridge-772000/config.json ...
	I0328 12:14:39.285140   19309 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/bridge-772000/config.json: {Name:mk99aeca45484bf73f766193e1b212c89fa20848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:14:39.285336   19309 start.go:360] acquireMachinesLock for bridge-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:39.285364   19309 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "bridge-772000"
	I0328 12:14:39.285376   19309 start.go:93] Provisioning new machine with config: &{Name:bridge-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:39.285400   19309 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:39.292953   19309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:39.307557   19309 start.go:159] libmachine.API.Create for "bridge-772000" (driver="qemu2")
	I0328 12:14:39.307587   19309 client.go:168] LocalClient.Create starting
	I0328 12:14:39.307647   19309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:39.307675   19309 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:39.307686   19309 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:39.307734   19309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:39.307757   19309 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:39.307763   19309 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:39.308110   19309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:39.456429   19309 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:39.520953   19309 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:39.520959   19309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:39.521138   19309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:39.533689   19309 main.go:141] libmachine: STDOUT: 
	I0328 12:14:39.533712   19309 main.go:141] libmachine: STDERR: 
	I0328 12:14:39.533770   19309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2 +20000M
	I0328 12:14:39.544863   19309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:39.544882   19309 main.go:141] libmachine: STDERR: 
	I0328 12:14:39.544898   19309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:39.544904   19309 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:39.544942   19309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:c6:af:59:59:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:39.546730   19309 main.go:141] libmachine: STDOUT: 
	I0328 12:14:39.546755   19309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:39.546782   19309 client.go:171] duration metric: took 239.183209ms to LocalClient.Create
	I0328 12:14:41.549108   19309 start.go:128] duration metric: took 2.26363475s to createHost
	I0328 12:14:41.549199   19309 start.go:83] releasing machines lock for "bridge-772000", held for 2.263798709s
	W0328 12:14:41.549261   19309 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:41.556423   19309 out.go:177] * Deleting "bridge-772000" in qemu2 ...
	W0328 12:14:41.592548   19309 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:41.592588   19309 start.go:728] Will try again in 5 seconds ...
	I0328 12:14:46.594414   19309 start.go:360] acquireMachinesLock for bridge-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:46.594854   19309 start.go:364] duration metric: took 328.084µs to acquireMachinesLock for "bridge-772000"
	I0328 12:14:46.594986   19309 start.go:93] Provisioning new machine with config: &{Name:bridge-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:46.595313   19309 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:46.606202   19309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:46.647020   19309 start.go:159] libmachine.API.Create for "bridge-772000" (driver="qemu2")
	I0328 12:14:46.647075   19309 client.go:168] LocalClient.Create starting
	I0328 12:14:46.647170   19309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:46.647234   19309 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:46.647253   19309 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:46.647320   19309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:46.647361   19309 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:46.647372   19309 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:46.647936   19309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:46.806057   19309 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:46.867420   19309 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:46.867428   19309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:46.867611   19309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:46.880378   19309 main.go:141] libmachine: STDOUT: 
	I0328 12:14:46.880400   19309 main.go:141] libmachine: STDERR: 
	I0328 12:14:46.880462   19309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2 +20000M
	I0328 12:14:46.891778   19309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:46.891799   19309 main.go:141] libmachine: STDERR: 
	I0328 12:14:46.891811   19309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:46.891817   19309 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:46.891854   19309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:d2:92:a6:09:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/bridge-772000/disk.qcow2
	I0328 12:14:46.893712   19309 main.go:141] libmachine: STDOUT: 
	I0328 12:14:46.893729   19309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:46.893743   19309 client.go:171] duration metric: took 246.658542ms to LocalClient.Create
	I0328 12:14:48.895923   19309 start.go:128] duration metric: took 2.300553791s to createHost
	I0328 12:14:48.895961   19309 start.go:83] releasing machines lock for "bridge-772000", held for 2.301056417s
	W0328 12:14:48.896275   19309 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:48.906125   19309 out.go:177] 
	W0328 12:14:48.913332   19309 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:14:48.913360   19309 out.go:239] * 
	* 
	W0328 12:14:48.916572   19309 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:14:48.924955   19309 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
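This Start failure, and the kubenet and old-k8s-version starts that follow, all report the same STDERR from socket_vmnet_client: nothing is accepting connections on /var/run/socket_vmnet, so the client cannot hand QEMU a network file descriptor and minikube exits with GUEST_PROVISION. A minimal diagnostic sketch, assuming socket_vmnet is installed under /opt/socket_vmnet to match the client path logged above (the gateway address below is an illustrative assumption, not a value from this report):

	# Confirm whether the daemon's unix socket exists at the path the client uses
	ls -l /var/run/socket_vmnet

	# socket_vmnet must be running as root before socket_vmnet_client can connect;
	# one way to start it by hand (the gateway value is an assumption)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the daemon is instead managed as a launchd service on this agent, restarting that service is the equivalent fix.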

TestNetworkPlugins/group/kubenet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-772000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.866807292s)

-- stdout --
	* [kubenet-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-772000" primary control-plane node in "kubenet-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:14:51.230672   19423 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:14:51.230818   19423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:51.230825   19423 out.go:304] Setting ErrFile to fd 2...
	I0328 12:14:51.230828   19423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:14:51.230944   19423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:14:51.232021   19423 out.go:298] Setting JSON to false
	I0328 12:14:51.248160   19423 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11663,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:14:51.248218   19423 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:14:51.255408   19423 out.go:177] * [kubenet-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:14:51.263386   19423 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:14:51.264820   19423 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:14:51.263447   19423 notify.go:220] Checking for updates...
	I0328 12:14:51.267378   19423 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:14:51.270365   19423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:14:51.274223   19423 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:14:51.277337   19423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:14:51.280726   19423 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:14:51.280799   19423 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:14:51.280840   19423 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:14:51.282522   19423 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:14:51.289361   19423 start.go:297] selected driver: qemu2
	I0328 12:14:51.289366   19423 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:14:51.289376   19423 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:14:51.291632   19423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:14:51.295207   19423 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:14:51.298413   19423 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:14:51.298447   19423 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0328 12:14:51.298474   19423 start.go:340] cluster config:
	{Name:kubenet-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:14:51.302613   19423 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:14:51.310314   19423 out.go:177] * Starting "kubenet-772000" primary control-plane node in "kubenet-772000" cluster
	I0328 12:14:51.314394   19423 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:14:51.314405   19423 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:14:51.314414   19423 cache.go:56] Caching tarball of preloaded images
	I0328 12:14:51.314465   19423 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:14:51.314470   19423 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:14:51.314523   19423 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kubenet-772000/config.json ...
	I0328 12:14:51.314532   19423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/kubenet-772000/config.json: {Name:mk59b78f88d58b7ff4ddc40d60e601ecc24a674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:14:51.314725   19423 start.go:360] acquireMachinesLock for kubenet-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:51.314752   19423 start.go:364] duration metric: took 22.291µs to acquireMachinesLock for "kubenet-772000"
	I0328 12:14:51.314764   19423 start.go:93] Provisioning new machine with config: &{Name:kubenet-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:51.314794   19423 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:51.322332   19423 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:51.337554   19423 start.go:159] libmachine.API.Create for "kubenet-772000" (driver="qemu2")
	I0328 12:14:51.337575   19423 client.go:168] LocalClient.Create starting
	I0328 12:14:51.337631   19423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:51.337664   19423 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:51.337673   19423 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:51.337716   19423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:51.337736   19423 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:51.337744   19423 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:51.338104   19423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:51.486794   19423 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:51.571897   19423 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:51.571906   19423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:51.572084   19423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:51.584714   19423 main.go:141] libmachine: STDOUT: 
	I0328 12:14:51.584740   19423 main.go:141] libmachine: STDERR: 
	I0328 12:14:51.584797   19423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2 +20000M
	I0328 12:14:51.596076   19423 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:51.596092   19423 main.go:141] libmachine: STDERR: 
	I0328 12:14:51.596111   19423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:51.596118   19423 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:51.596149   19423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d3:59:75:9d:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:51.598075   19423 main.go:141] libmachine: STDOUT: 
	I0328 12:14:51.598090   19423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:51.598107   19423 client.go:171] duration metric: took 260.524083ms to LocalClient.Create
	I0328 12:14:53.600405   19423 start.go:128] duration metric: took 2.285546042s to createHost
	I0328 12:14:53.600485   19423 start.go:83] releasing machines lock for "kubenet-772000", held for 2.285697s
	W0328 12:14:53.600540   19423 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:53.610602   19423 out.go:177] * Deleting "kubenet-772000" in qemu2 ...
	W0328 12:14:53.639595   19423 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:14:53.639626   19423 start.go:728] Will try again in 5 seconds ...
	I0328 12:14:58.641872   19423 start.go:360] acquireMachinesLock for kubenet-772000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:14:58.642306   19423 start.go:364] duration metric: took 335.083µs to acquireMachinesLock for "kubenet-772000"
	I0328 12:14:58.642491   19423 start.go:93] Provisioning new machine with config: &{Name:kubenet-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:14:58.642737   19423 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:14:58.649501   19423 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 12:14:58.696725   19423 start.go:159] libmachine.API.Create for "kubenet-772000" (driver="qemu2")
	I0328 12:14:58.696808   19423 client.go:168] LocalClient.Create starting
	I0328 12:14:58.696971   19423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:14:58.697051   19423 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:58.697071   19423 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:58.697133   19423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:14:58.697174   19423 main.go:141] libmachine: Decoding PEM data...
	I0328 12:14:58.697187   19423 main.go:141] libmachine: Parsing certificate...
	I0328 12:14:58.697699   19423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:14:58.968655   19423 main.go:141] libmachine: Creating SSH key...
	I0328 12:14:59.003475   19423 main.go:141] libmachine: Creating Disk image...
	I0328 12:14:59.003481   19423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:14:59.003655   19423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:59.015880   19423 main.go:141] libmachine: STDOUT: 
	I0328 12:14:59.015904   19423 main.go:141] libmachine: STDERR: 
	I0328 12:14:59.015969   19423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2 +20000M
	I0328 12:14:59.026652   19423 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:14:59.026670   19423 main.go:141] libmachine: STDERR: 
	I0328 12:14:59.026681   19423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:59.026685   19423 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:14:59.026726   19423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:df:89:a4:d8:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/kubenet-772000/disk.qcow2
	I0328 12:14:59.028420   19423 main.go:141] libmachine: STDOUT: 
	I0328 12:14:59.028436   19423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:14:59.028449   19423 client.go:171] duration metric: took 331.611042ms to LocalClient.Create
	I0328 12:15:01.030667   19423 start.go:128] duration metric: took 2.387864333s to createHost
	I0328 12:15:01.030730   19423 start.go:83] releasing machines lock for "kubenet-772000", held for 2.388327541s
	W0328 12:15:01.031064   19423 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:01.038658   19423 out.go:177] 
	W0328 12:15:01.042715   19423 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:01.042745   19423 out.go:239] * 
	* 
	W0328 12:15:01.045351   19423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:01.054609   19423 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.87s)

TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.012315167s)

-- stdout --
	* [old-k8s-version-648000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-648000" primary control-plane node in "old-k8s-version-648000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-648000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:03.306654   19541 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:03.306760   19541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:03.306763   19541 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:03.306766   19541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:03.306881   19541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:03.307987   19541 out.go:298] Setting JSON to false
	I0328 12:15:03.324407   19541 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11675,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:03.324470   19541 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:03.331505   19541 out.go:177] * [old-k8s-version-648000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:03.338456   19541 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:03.341489   19541 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:03.338510   19541 notify.go:220] Checking for updates...
	I0328 12:15:03.349359   19541 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:03.353227   19541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:03.356480   19541 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:03.359431   19541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:03.362760   19541 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:03.362833   19541 config.go:182] Loaded profile config "stopped-upgrade-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0328 12:15:03.362873   19541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:03.366395   19541 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:15:03.373399   19541 start.go:297] selected driver: qemu2
	I0328 12:15:03.373406   19541 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:15:03.373413   19541 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:03.375780   19541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:15:03.380355   19541 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:15:03.383450   19541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:03.383485   19541 cni.go:84] Creating CNI manager for ""
	I0328 12:15:03.383492   19541 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0328 12:15:03.383516   19541 start.go:340] cluster config:
	{Name:old-k8s-version-648000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:03.387707   19541 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:03.395360   19541 out.go:177] * Starting "old-k8s-version-648000" primary control-plane node in "old-k8s-version-648000" cluster
	I0328 12:15:03.399448   19541 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 12:15:03.399468   19541 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 12:15:03.399479   19541 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:03.399543   19541 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:03.399549   19541 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0328 12:15:03.399621   19541 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/old-k8s-version-648000/config.json ...
	I0328 12:15:03.399635   19541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/old-k8s-version-648000/config.json: {Name:mk9047b75ba1e1d9ed568207113469c47a8d2ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:15:03.399853   19541 start.go:360] acquireMachinesLock for old-k8s-version-648000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:03.399884   19541 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "old-k8s-version-648000"
	I0328 12:15:03.399896   19541 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:03.399922   19541 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:03.408410   19541 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:03.422649   19541 start.go:159] libmachine.API.Create for "old-k8s-version-648000" (driver="qemu2")
	I0328 12:15:03.422681   19541 client.go:168] LocalClient.Create starting
	I0328 12:15:03.422745   19541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:03.422774   19541 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:03.422782   19541 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:03.422824   19541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:03.422844   19541 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:03.422852   19541 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:03.423225   19541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:03.572527   19541 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:03.774521   19541 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:03.774534   19541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:03.774742   19541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:03.787631   19541 main.go:141] libmachine: STDOUT: 
	I0328 12:15:03.787652   19541 main.go:141] libmachine: STDERR: 
	I0328 12:15:03.787705   19541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2 +20000M
	I0328 12:15:03.798677   19541 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:03.798693   19541 main.go:141] libmachine: STDERR: 
	I0328 12:15:03.798710   19541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:03.798714   19541 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:03.798951   19541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:f8:da:fa:94:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:03.801523   19541 main.go:141] libmachine: STDOUT: 
	I0328 12:15:03.801565   19541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:03.801582   19541 client.go:171] duration metric: took 378.891ms to LocalClient.Create
	I0328 12:15:05.802003   19541 start.go:128] duration metric: took 2.402031208s to createHost
	I0328 12:15:05.802083   19541 start.go:83] releasing machines lock for "old-k8s-version-648000", held for 2.402163125s
	W0328 12:15:05.802149   19541 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:05.817188   19541 out.go:177] * Deleting "old-k8s-version-648000" in qemu2 ...
	W0328 12:15:05.846570   19541 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:05.846602   19541 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:10.848873   19541 start.go:360] acquireMachinesLock for old-k8s-version-648000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:10.849427   19541 start.go:364] duration metric: took 412.458µs to acquireMachinesLock for "old-k8s-version-648000"
	I0328 12:15:10.849550   19541 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:10.849830   19541 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:10.858440   19541 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:10.906762   19541 start.go:159] libmachine.API.Create for "old-k8s-version-648000" (driver="qemu2")
	I0328 12:15:10.906827   19541 client.go:168] LocalClient.Create starting
	I0328 12:15:10.906997   19541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:10.907059   19541 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:10.907081   19541 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:10.907154   19541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:10.907196   19541 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:10.907213   19541 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:10.907843   19541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:11.064523   19541 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:11.215317   19541 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:11.215324   19541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:11.215527   19541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:11.228202   19541 main.go:141] libmachine: STDOUT: 
	I0328 12:15:11.228225   19541 main.go:141] libmachine: STDERR: 
	I0328 12:15:11.228293   19541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2 +20000M
	I0328 12:15:11.239341   19541 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:11.239358   19541 main.go:141] libmachine: STDERR: 
	I0328 12:15:11.239371   19541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:11.239384   19541 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:11.239425   19541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4c:bf:eb:2c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:11.241186   19541 main.go:141] libmachine: STDOUT: 
	I0328 12:15:11.241201   19541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:11.241216   19541 client.go:171] duration metric: took 334.362792ms to LocalClient.Create
	I0328 12:15:13.243449   19541 start.go:128] duration metric: took 2.3935545s to createHost
	I0328 12:15:13.243558   19541 start.go:83] releasing machines lock for "old-k8s-version-648000", held for 2.394043625s
	W0328 12:15:13.243988   19541 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:13.258764   19541 out.go:177] 
	W0328 12:15:13.261828   19541 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:13.261869   19541 out.go:239] * 
	* 
	W0328 12:15:13.264414   19541 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:13.273760   19541 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (67.276875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-648000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-648000 create -f testdata/busybox.yaml: exit status 1 (30.66325ms)

** stderr ** 
	error: context "old-k8s-version-648000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-648000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (31.220167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (31.585125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
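This failure, and the EnableAddonWhileActive check that follows, are downstream of the failed FirstStart: the VM never booted, so the "old-k8s-version-648000" kubeconfig context was never written, and every kubectl --context invocation fails before reaching any cluster. A quick way to confirm, reusing the context name from the log (kubectl config get-contexts exits non-zero when the named context is absent):

	kubectl config get-contexts old-k8s-version-648000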

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-648000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-648000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-648000 describe deploy/metrics-server -n kube-system: exit status 1 (27.088375ms)

** stderr ** 
	error: context "old-k8s-version-648000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-648000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (32.06925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
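The assertion above checks that the metrics-server deployment picked up the overridden registry and image (--registries=MetricsServer=fake.domain, --images=MetricsServer=registry.k8s.io/echoserver:1.4). On a cluster that actually started, the same check could be made directly; a sketch:

    kubectl --context old-k8s-version-648000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected output per the test assertion: fake.domain/registry.k8s.io/echoserver:1.4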

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.201323125s)

-- stdout --
	* [old-k8s-version-648000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-648000" primary control-plane node in "old-k8s-version-648000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:15.639261   19583 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:15.639395   19583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:15.639399   19583 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:15.639405   19583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:15.639530   19583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:15.640538   19583 out.go:298] Setting JSON to false
	I0328 12:15:15.656821   19583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11687,"bootTime":1711641628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:15.656892   19583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:15.661146   19583 out.go:177] * [old-k8s-version-648000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:15.668171   19583 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:15.672154   19583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:15.668205   19583 notify.go:220] Checking for updates...
	I0328 12:15:15.677323   19583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:15.680131   19583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:15.683199   19583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:15.686356   19583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:15.689464   19583 config.go:182] Loaded profile config "old-k8s-version-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0328 12:15:15.692064   19583 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 12:15:15.695193   19583 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:15.700068   19583 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:15:15.707164   19583 start.go:297] selected driver: qemu2
	I0328 12:15:15.707171   19583 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:15.707232   19583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:15.709545   19583 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:15.709597   19583 cni.go:84] Creating CNI manager for ""
	I0328 12:15:15.709605   19583 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0328 12:15:15.709636   19583 start.go:340] cluster config:
	{Name:old-k8s-version-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:15.714051   19583 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:15.722162   19583 out.go:177] * Starting "old-k8s-version-648000" primary control-plane node in "old-k8s-version-648000" cluster
	I0328 12:15:15.726081   19583 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 12:15:15.726105   19583 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 12:15:15.726120   19583 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:15.726188   19583 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:15.726198   19583 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0328 12:15:15.726264   19583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/old-k8s-version-648000/config.json ...
	I0328 12:15:15.726574   19583 start.go:360] acquireMachinesLock for old-k8s-version-648000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:15.726602   19583 start.go:364] duration metric: took 21.166µs to acquireMachinesLock for "old-k8s-version-648000"
	I0328 12:15:15.726614   19583 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:15.726620   19583 fix.go:54] fixHost starting: 
	I0328 12:15:15.726735   19583 fix.go:112] recreateIfNeeded on old-k8s-version-648000: state=Stopped err=<nil>
	W0328 12:15:15.726744   19583 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:15.731007   19583 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-648000" ...
	I0328 12:15:15.738124   19583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4c:bf:eb:2c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:15.740211   19583 main.go:141] libmachine: STDOUT: 
	I0328 12:15:15.740233   19583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:15.740262   19583 fix.go:56] duration metric: took 13.64125ms for fixHost
	I0328 12:15:15.740267   19583 start.go:83] releasing machines lock for "old-k8s-version-648000", held for 13.66125ms
	W0328 12:15:15.740273   19583 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:15.740304   19583 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:15.740308   19583 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:20.742424   19583 start.go:360] acquireMachinesLock for old-k8s-version-648000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:20.742857   19583 start.go:364] duration metric: took 342.916µs to acquireMachinesLock for "old-k8s-version-648000"
	I0328 12:15:20.743005   19583 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:20.743029   19583 fix.go:54] fixHost starting: 
	I0328 12:15:20.743767   19583 fix.go:112] recreateIfNeeded on old-k8s-version-648000: state=Stopped err=<nil>
	W0328 12:15:20.743797   19583 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:20.749332   19583 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-648000" ...
	I0328 12:15:20.752771   19583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4c:bf:eb:2c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/old-k8s-version-648000/disk.qcow2
	I0328 12:15:20.763255   19583 main.go:141] libmachine: STDOUT: 
	I0328 12:15:20.763348   19583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:20.763441   19583 fix.go:56] duration metric: took 20.413292ms for fixHost
	I0328 12:15:20.763471   19583 start.go:83] releasing machines lock for "old-k8s-version-648000", held for 20.585958ms
	W0328 12:15:20.763733   19583 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:20.772263   19583 out.go:177] 
	W0328 12:15:20.779511   19583 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:20.779544   19583 out.go:239] * 
	* 
	W0328 12:15:20.782465   19583 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:20.799692   19583 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-648000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (67.025292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
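The stderr above shows minikube's built-in recovery (one retry after 5 seconds) hitting the same connection-refused error, then suggesting a profile delete. A manual recovery sequence implied by that output, as a sketch: the delete and start commands are taken from the log (start flags abbreviated), while the socket_vmnet restart is an added assumption, since deleting the profile alone cannot revive a host daemon that refuses connections.

    out/minikube-darwin-arm64 delete -p old-k8s-version-648000
    sudo brew services restart socket_vmnet   # assumption: Homebrew-managed daemon
    out/minikube-darwin-arm64 start -p old-k8s-version-648000 --driver=qemu2 --kubernetes-version=v1.20.0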

TestStartStop/group/no-preload/serial/FirstStart (10.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (10.637870417s)

-- stdout --
	* [no-preload-293000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-293000" primary control-plane node in "no-preload-293000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-293000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:15.998101   19594 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:15.998214   19594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:15.998217   19594 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:15.998219   19594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:15.998359   19594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:15.999437   19594 out.go:298] Setting JSON to false
	I0328 12:15:16.015779   19594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11688,"bootTime":1711641628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:16.015858   19594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:16.018672   19594 out.go:177] * [no-preload-293000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:16.025535   19594 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:16.029674   19594 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:16.025604   19594 notify.go:220] Checking for updates...
	I0328 12:15:16.032699   19594 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:16.035665   19594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:16.038632   19594 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:16.041700   19594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:16.043298   19594 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:16.043376   19594 config.go:182] Loaded profile config "old-k8s-version-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0328 12:15:16.043421   19594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:16.046660   19594 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:15:16.053457   19594 start.go:297] selected driver: qemu2
	I0328 12:15:16.053461   19594 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:15:16.053467   19594 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:16.055658   19594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:15:16.059656   19594 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:15:16.062781   19594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:16.062829   19594 cni.go:84] Creating CNI manager for ""
	I0328 12:15:16.062838   19594 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:16.062842   19594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:15:16.062875   19594 start.go:340] cluster config:
	{Name:no-preload-293000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:16.067294   19594 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.075636   19594 out.go:177] * Starting "no-preload-293000" primary control-plane node in "no-preload-293000" cluster
	I0328 12:15:16.079641   19594 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 12:15:16.079728   19594 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/no-preload-293000/config.json ...
	I0328 12:15:16.079755   19594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/no-preload-293000/config.json: {Name:mkc7ccc1f7eb3719559bd33852e2e80d7870a366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:15:16.079757   19594 cache.go:107] acquiring lock: {Name:mk304b79d606e7d0512c2951bcac95d35ef30546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.079777   19594 cache.go:107] acquiring lock: {Name:mke920e7c174bcf77ca51283537efa2f08d33951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.079790   19594 cache.go:107] acquiring lock: {Name:mk04a964aca71a591776be6cd27912de14514bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.079821   19594 cache.go:107] acquiring lock: {Name:mk41a41ff7017f805e6b103153725e94f44a407a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.079832   19594 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0328 12:15:16.079838   19594 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.5µs
	I0328 12:15:16.079844   19594 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0328 12:15:16.079953   19594 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 12:15:16.079964   19594 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 12:15:16.080008   19594 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 12:15:16.080001   19594 cache.go:107] acquiring lock: {Name:mk0f5a19b690751067a9aba913b3aa73bb9c087d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.080006   19594 cache.go:107] acquiring lock: {Name:mk2e6634cc44b9fcab3c8ac795cedaa60df059ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.080045   19594 start.go:360] acquireMachinesLock for no-preload-293000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:16.080033   19594 cache.go:107] acquiring lock: {Name:mkc3b4d78a2c27f00f14f31797187b66c7dea8ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.080058   19594 cache.go:107] acquiring lock: {Name:mk0a71bd16714770fd494c53926e9cb900a4f273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:16.080087   19594 start.go:364] duration metric: took 35.541µs to acquireMachinesLock for "no-preload-293000"
	I0328 12:15:16.080139   19594 start.go:93] Provisioning new machine with config: &{Name:no-preload-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:16.080168   19594 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:16.088616   19594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:16.080118   19594 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 12:15:16.080217   19594 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 12:15:16.080343   19594 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 12:15:16.084932   19594 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 12:15:16.094981   19594 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 12:15:16.094977   19594 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 12:15:16.095622   19594 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 12:15:16.098621   19594 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 12:15:16.098665   19594 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 12:15:16.098791   19594 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 12:15:16.098883   19594 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 12:15:16.105507   19594 start.go:159] libmachine.API.Create for "no-preload-293000" (driver="qemu2")
	I0328 12:15:16.105530   19594 client.go:168] LocalClient.Create starting
	I0328 12:15:16.105623   19594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:16.105649   19594 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:16.105661   19594 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:16.105708   19594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:16.105732   19594 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:16.105741   19594 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:16.106094   19594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:16.259574   19594 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:16.404180   19594 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:16.404196   19594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:16.404390   19594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:16.416533   19594 main.go:141] libmachine: STDOUT: 
	I0328 12:15:16.416555   19594 main.go:141] libmachine: STDERR: 
	I0328 12:15:16.416610   19594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2 +20000M
	I0328 12:15:16.431465   19594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:16.431484   19594 main.go:141] libmachine: STDERR: 
	I0328 12:15:16.431521   19594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:16.431524   19594 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:16.431563   19594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b1:39:20:f0:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:16.433448   19594 main.go:141] libmachine: STDOUT: 
	I0328 12:15:16.433466   19594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:16.433485   19594 client.go:171] duration metric: took 327.945833ms to LocalClient.Create
	I0328 12:15:18.022568   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 12:15:18.064899   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 12:15:18.122971   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 12:15:18.141269   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 12:15:18.150952   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0328 12:15:18.168802   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 12:15:18.177875   19594 cache.go:162] opening:  /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0328 12:15:18.313655   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0328 12:15:18.313712   19594 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.233818s
	I0328 12:15:18.313740   19594 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0328 12:15:18.433748   19594 start.go:128] duration metric: took 2.353525666s to createHost
	I0328 12:15:18.433809   19594 start.go:83] releasing machines lock for "no-preload-293000", held for 2.353636709s
	W0328 12:15:18.433863   19594 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:18.448884   19594 out.go:177] * Deleting "no-preload-293000" in qemu2 ...
	W0328 12:15:18.480938   19594 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:18.480967   19594 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:21.051056   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0328 12:15:21.051073   19594 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 4.971254042s
	I0328 12:15:21.051079   19594 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0328 12:15:22.074031   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0328 12:15:22.074044   19594 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.994069625s
	I0328 12:15:22.074053   19594 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0328 12:15:22.716067   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0328 12:15:22.716134   19594 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 6.636290375s
	I0328 12:15:22.716173   19594 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0328 12:15:22.769395   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0328 12:15:22.769441   19594 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 6.689412667s
	I0328 12:15:22.769468   19594 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0328 12:15:22.771678   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0328 12:15:22.771742   19594 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 6.691739167s
	I0328 12:15:22.771762   19594 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0328 12:15:23.481327   19594 start.go:360] acquireMachinesLock for no-preload-293000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:24.029779   19594 start.go:364] duration metric: took 548.316041ms to acquireMachinesLock for "no-preload-293000"
	I0328 12:15:24.029899   19594 start.go:93] Provisioning new machine with config: &{Name:no-preload-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:24.030163   19594 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:24.045842   19594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:24.094018   19594 start.go:159] libmachine.API.Create for "no-preload-293000" (driver="qemu2")
	I0328 12:15:24.094091   19594 client.go:168] LocalClient.Create starting
	I0328 12:15:24.094257   19594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:24.094323   19594 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:24.094345   19594 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:24.094418   19594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:24.094459   19594 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:24.094472   19594 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:24.094939   19594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:24.285415   19594 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:24.529969   19594 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:24.529983   19594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:24.530189   19594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:24.543314   19594 main.go:141] libmachine: STDOUT: 
	I0328 12:15:24.543332   19594 main.go:141] libmachine: STDERR: 
	I0328 12:15:24.543402   19594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2 +20000M
	I0328 12:15:24.554608   19594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:24.554622   19594 main.go:141] libmachine: STDERR: 
	I0328 12:15:24.554639   19594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:24.554648   19594 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:24.554693   19594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:50:ed:ad:88:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:24.556515   19594 main.go:141] libmachine: STDOUT: 
	I0328 12:15:24.556529   19594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:24.556543   19594 client.go:171] duration metric: took 462.427666ms to LocalClient.Create
	I0328 12:15:26.069205   19594 cache.go:157] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0328 12:15:26.069275   19594 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 9.989330167s
	I0328 12:15:26.069308   19594 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0328 12:15:26.069349   19594 cache.go:87] Successfully saved all images to host disk.
	I0328 12:15:26.558825   19594 start.go:128] duration metric: took 2.528602458s to createHost
	I0328 12:15:26.558886   19594 start.go:83] releasing machines lock for "no-preload-293000", held for 2.529047584s
	W0328 12:15:26.559153   19594 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:26.568660   19594 out.go:177] 
	W0328 12:15:26.573801   19594 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:26.573830   19594 out.go:239] * 
	* 
	W0328 12:15:26.576449   19594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:26.588569   19594 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (67.486083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.71s)
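Every qemu2 start in this run dies the same way: the socket_vmnet_client invocation shown in the log cannot reach /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and every later step fails on a missing kube context. A plausible first triage step on the CI host is to check whether the socket_vmnet daemon is up; a minimal sketch follows, where the service name and start command are assumptions based on a Homebrew-style install (only the /opt/socket_vmnet and /var/run/socket_vmnet paths are taken from this log):

	ls -l /var/run/socket_vmnet             # the socket the qemu2 driver dials; absent if the daemon is down
	pgrep -fl socket_vmnet                  # is any socket_vmnet process running at all?
	sudo brew services start socket_vmnet   # one common way to (re)start the daemon on macOS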

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-648000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (33.076291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-648000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-648000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-648000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.838792ms)

** stderr ** 
	error: context "old-k8s-version-648000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-648000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (31.125708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-648000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (30.822584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-648000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-648000 --alsologtostderr -v=1: exit status 83 (58.73625ms)

-- stdout --
	* The control-plane node old-k8s-version-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-648000"

-- /stdout --
** stderr ** 
	I0328 12:15:21.075214   19656 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:21.075943   19656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:21.075946   19656 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:21.075949   19656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:21.076076   19656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:21.076255   19656 out.go:298] Setting JSON to false
	I0328 12:15:21.076264   19656 mustload.go:65] Loading cluster: old-k8s-version-648000
	I0328 12:15:21.076436   19656 config.go:182] Loaded profile config "old-k8s-version-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0328 12:15:21.092297   19656 out.go:177] * The control-plane node old-k8s-version-648000 host is not running: state=Stopped
	I0328 12:15:21.100753   19656 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-648000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-648000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (31.216208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (31.175292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
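Three exit codes recur across these sections: exit status 80 whenever provisioning aborts with GUEST_PROVISION, exit status 83 when a command such as pause runs against a host whose state is Stopped, and exit status 7 from the post-mortem status probe against a stopped host ("may be ok"). A wrapper that wanted to tell "never provisioned" apart from "merely stopped" could branch on the status probe's exit code; a minimal sketch, using only the command and codes seen in this report (the helper itself is hypothetical):

	st=$(out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000); rc=$?
	case "$rc" in
	  0) echo "host running: $st" ;;                   # healthy cluster
	  7) echo "host exists but is stopped: $st" ;;     # matches the post-mortems above
	  *) echo "status probe failed with exit $rc" ;;   # anything else is unexpected here
	esac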

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.954334334s)

-- stdout --
	* [embed-certs-778000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-778000" primary control-plane node in "embed-certs-778000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-778000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:21.570784   19679 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:21.570901   19679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:21.570904   19679 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:21.570907   19679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:21.571056   19679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:21.572150   19679 out.go:298] Setting JSON to false
	I0328 12:15:21.588769   19679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11693,"bootTime":1711641628,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:21.588879   19679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:21.593564   19679 out.go:177] * [embed-certs-778000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:21.605567   19679 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:21.601534   19679 notify.go:220] Checking for updates...
	I0328 12:15:21.612513   19679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:21.620413   19679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:21.628568   19679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:21.635471   19679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:21.642528   19679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:21.646837   19679 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:21.646899   19679 config.go:182] Loaded profile config "no-preload-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0328 12:15:21.646951   19679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:21.650397   19679 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:15:21.657489   19679 start.go:297] selected driver: qemu2
	I0328 12:15:21.657495   19679 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:15:21.657502   19679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:21.659911   19679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:15:21.664492   19679 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:15:21.668575   19679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:21.668607   19679 cni.go:84] Creating CNI manager for ""
	I0328 12:15:21.668615   19679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:21.668619   19679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:15:21.668648   19679 start.go:340] cluster config:
	{Name:embed-certs-778000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-778000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:21.673210   19679 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:21.681505   19679 out.go:177] * Starting "embed-certs-778000" primary control-plane node in "embed-certs-778000" cluster
	I0328 12:15:21.684535   19679 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:15:21.684549   19679 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:15:21.684557   19679 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:21.684623   19679 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:21.684629   19679 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:15:21.684684   19679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/embed-certs-778000/config.json ...
	I0328 12:15:21.684696   19679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/embed-certs-778000/config.json: {Name:mk25d57f4ccbc9072bc6df379da2d6aabc7bb72f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:15:21.684913   19679 start.go:360] acquireMachinesLock for embed-certs-778000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:21.684952   19679 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "embed-certs-778000"
	I0328 12:15:21.684968   19679 start.go:93] Provisioning new machine with config: &{Name:embed-certs-778000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-778000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:21.684996   19679 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:21.693553   19679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:21.709849   19679 start.go:159] libmachine.API.Create for "embed-certs-778000" (driver="qemu2")
	I0328 12:15:21.709870   19679 client.go:168] LocalClient.Create starting
	I0328 12:15:21.709929   19679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:21.709961   19679 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:21.709975   19679 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:21.710022   19679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:21.710043   19679 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:21.710048   19679 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:21.710409   19679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:21.857766   19679 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:22.001039   19679 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:22.001047   19679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:22.001221   19679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:22.014155   19679 main.go:141] libmachine: STDOUT: 
	I0328 12:15:22.014182   19679 main.go:141] libmachine: STDERR: 
	I0328 12:15:22.014242   19679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2 +20000M
	I0328 12:15:22.025398   19679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:22.025414   19679 main.go:141] libmachine: STDERR: 
	I0328 12:15:22.025428   19679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:22.025432   19679 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:22.025461   19679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:77:f1:49:bc:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:22.027241   19679 main.go:141] libmachine: STDOUT: 
	I0328 12:15:22.027266   19679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:22.027287   19679 client.go:171] duration metric: took 317.407959ms to LocalClient.Create
	I0328 12:15:24.029564   19679 start.go:128] duration metric: took 2.344509917s to createHost
	I0328 12:15:24.029638   19679 start.go:83] releasing machines lock for "embed-certs-778000", held for 2.344648375s
	W0328 12:15:24.029686   19679 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:24.058848   19679 out.go:177] * Deleting "embed-certs-778000" in qemu2 ...
	W0328 12:15:24.081353   19679 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:24.081373   19679 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:29.083624   19679 start.go:360] acquireMachinesLock for embed-certs-778000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:29.084061   19679 start.go:364] duration metric: took 332.583µs to acquireMachinesLock for "embed-certs-778000"
	I0328 12:15:29.084195   19679 start.go:93] Provisioning new machine with config: &{Name:embed-certs-778000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-778000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:29.084531   19679 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:29.093230   19679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:29.141073   19679 start.go:159] libmachine.API.Create for "embed-certs-778000" (driver="qemu2")
	I0328 12:15:29.141126   19679 client.go:168] LocalClient.Create starting
	I0328 12:15:29.141215   19679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:29.141266   19679 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:29.141286   19679 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:29.141350   19679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:29.141378   19679 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:29.141388   19679 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:29.141887   19679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:29.317175   19679 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:29.415802   19679 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:29.415811   19679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:29.415977   19679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:29.428122   19679 main.go:141] libmachine: STDOUT: 
	I0328 12:15:29.428142   19679 main.go:141] libmachine: STDERR: 
	I0328 12:15:29.428191   19679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2 +20000M
	I0328 12:15:29.438881   19679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:29.438907   19679 main.go:141] libmachine: STDERR: 
	I0328 12:15:29.438927   19679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:29.438932   19679 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:29.438974   19679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:54:29:1b:01:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:29.440717   19679 main.go:141] libmachine: STDOUT: 
	I0328 12:15:29.440730   19679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:29.440745   19679 client.go:171] duration metric: took 299.6095ms to LocalClient.Create
	I0328 12:15:31.442951   19679 start.go:128] duration metric: took 2.358360375s to createHost
	I0328 12:15:31.443071   19679 start.go:83] releasing machines lock for "embed-certs-778000", held for 2.358949208s
	W0328 12:15:31.443622   19679 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-778000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-778000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:31.450887   19679 out.go:177] 
	W0328 12:15:31.465047   19679 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:31.465099   19679 out.go:239] * 
	* 
	W0328 12:15:31.467638   19679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:31.477429   19679 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (65.066875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
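Note also the retry pattern visible in the embed-certs log above: StartHost fails, minikube deletes the half-created machine, waits five seconds, and attempts the whole create once more before giving up with GUEST_PROVISION. Since the daemon was down for both attempts, the built-in retry cannot help; once socket_vmnet is restored, re-running the failed invocation by hand should suffice. A sketch of an outer retry loop mirroring what the log shows (hypothetical wrapper; flags copied from the failing command):

	for attempt in 1 2; do
	  out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --wait=true --driver=qemu2 --kubernetes-version=v1.29.3 && break
	  sleep 5   # same back-off the log reports: "Will try again in 5 seconds"
	done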

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-293000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-293000 create -f testdata/busybox.yaml: exit status 1 (29.042375ms)

** stderr ** 
	error: context "no-preload-293000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-293000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (31.075833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (30.903666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-293000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-293000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-293000 describe deploy/metrics-server -n kube-system: exit status 1 (26.651375ms)

** stderr ** 
	error: context "no-preload-293000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-293000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (31.259625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (6.61s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (6.545412084s)

-- stdout --
	* [no-preload-293000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-293000" primary control-plane node in "no-preload-293000" cluster
	* Restarting existing qemu2 VM for "no-preload-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:30.024157   19732 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:30.024313   19732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:30.024318   19732 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:30.024320   19732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:30.024449   19732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:30.025565   19732 out.go:298] Setting JSON to false
	I0328 12:15:30.041654   19732 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11702,"bootTime":1711641628,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:30.041728   19732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:30.046895   19732 out.go:177] * [no-preload-293000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:30.053936   19732 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:30.057769   19732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:30.054003   19732 notify.go:220] Checking for updates...
	I0328 12:15:30.064860   19732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:30.067853   19732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:30.070857   19732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:30.073874   19732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:30.077158   19732 config.go:182] Loaded profile config "no-preload-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0328 12:15:30.077406   19732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:30.080865   19732 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:15:30.087871   19732 start.go:297] selected driver: qemu2
	I0328 12:15:30.087879   19732 start.go:901] validating driver "qemu2" against &{Name:no-preload-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:30.087957   19732 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:30.090391   19732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:30.090436   19732 cni.go:84] Creating CNI manager for ""
	I0328 12:15:30.090443   19732 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:30.090475   19732 start.go:340] cluster config:
	{Name:no-preload-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:30.094848   19732 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.101749   19732 out.go:177] * Starting "no-preload-293000" primary control-plane node in "no-preload-293000" cluster
	I0328 12:15:30.105861   19732 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 12:15:30.105941   19732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/no-preload-293000/config.json ...
	I0328 12:15:30.105976   19732 cache.go:107] acquiring lock: {Name:mk304b79d606e7d0512c2951bcac95d35ef30546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.105993   19732 cache.go:107] acquiring lock: {Name:mk0a71bd16714770fd494c53926e9cb900a4f273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106008   19732 cache.go:107] acquiring lock: {Name:mk04a964aca71a591776be6cd27912de14514bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106051   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0328 12:15:30.106057   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0328 12:15:30.106059   19732 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.708µs
	I0328 12:15:30.106063   19732 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 79.042µs
	I0328 12:15:30.106108   19732 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0328 12:15:30.106065   19732 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0328 12:15:30.106073   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0328 12:15:30.106118   19732 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 139µs
	I0328 12:15:30.106123   19732 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0328 12:15:30.106073   19732 cache.go:107] acquiring lock: {Name:mkc3b4d78a2c27f00f14f31797187b66c7dea8ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106082   19732 cache.go:107] acquiring lock: {Name:mk41a41ff7017f805e6b103153725e94f44a407a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106161   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0328 12:15:30.106167   19732 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 94.375µs
	I0328 12:15:30.106171   19732 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0328 12:15:30.106084   19732 cache.go:107] acquiring lock: {Name:mk2e6634cc44b9fcab3c8ac795cedaa60df059ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106094   19732 cache.go:107] acquiring lock: {Name:mke920e7c174bcf77ca51283537efa2f08d33951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106138   19732 cache.go:107] acquiring lock: {Name:mk0f5a19b690751067a9aba913b3aa73bb9c087d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:30.106221   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0328 12:15:30.106226   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0328 12:15:30.106227   19732 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 145.875µs
	I0328 12:15:30.106232   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0328 12:15:30.106234   19732 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0328 12:15:30.106235   19732 cache.go:115] /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0328 12:15:30.106236   19732 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 142.833µs
	I0328 12:15:30.106240   19732 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 114.792µs
	I0328 12:15:30.106254   19732 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0328 12:15:30.106233   19732 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 149.833µs
	I0328 12:15:30.106259   19732 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0328 12:15:30.106241   19732 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0328 12:15:30.106262   19732 cache.go:87] Successfully saved all images to host disk.
	I0328 12:15:30.106375   19732 start.go:360] acquireMachinesLock for no-preload-293000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:31.443269   19732 start.go:364] duration metric: took 1.336835125s to acquireMachinesLock for "no-preload-293000"
	I0328 12:15:31.443445   19732 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:31.443480   19732 fix.go:54] fixHost starting: 
	I0328 12:15:31.444175   19732 fix.go:112] recreateIfNeeded on no-preload-293000: state=Stopped err=<nil>
	W0328 12:15:31.444210   19732 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:31.450925   19732 out.go:177] * Restarting existing qemu2 VM for "no-preload-293000" ...
	I0328 12:15:31.465097   19732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:50:ed:ad:88:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:31.475515   19732 main.go:141] libmachine: STDOUT: 
	I0328 12:15:31.475582   19732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:31.475685   19732 fix.go:56] duration metric: took 32.21325ms for fixHost
	I0328 12:15:31.475704   19732 start.go:83] releasing machines lock for "no-preload-293000", held for 32.39975ms
	W0328 12:15:31.475736   19732 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:31.475906   19732 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:31.475925   19732 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:36.477167   19732 start.go:360] acquireMachinesLock for no-preload-293000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:36.477629   19732 start.go:364] duration metric: took 330.958µs to acquireMachinesLock for "no-preload-293000"
	I0328 12:15:36.477758   19732 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:36.477823   19732 fix.go:54] fixHost starting: 
	I0328 12:15:36.478779   19732 fix.go:112] recreateIfNeeded on no-preload-293000: state=Stopped err=<nil>
	W0328 12:15:36.478809   19732 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:36.488316   19732 out.go:177] * Restarting existing qemu2 VM for "no-preload-293000" ...
	I0328 12:15:36.491438   19732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:50:ed:ad:88:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/no-preload-293000/disk.qcow2
	I0328 12:15:36.501226   19732 main.go:141] libmachine: STDOUT: 
	I0328 12:15:36.501304   19732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:36.501390   19732 fix.go:56] duration metric: took 23.609125ms for fixHost
	I0328 12:15:36.501410   19732 start.go:83] releasing machines lock for "no-preload-293000", held for 23.741916ms
	W0328 12:15:36.501633   19732 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:36.509303   19732 out.go:177] 
	W0328 12:15:36.513392   19732 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:36.513420   19732 out.go:239] * 
	* 
	W0328 12:15:36.515950   19732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:36.525271   19732 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-293000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (67.177ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.61s)
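
Every failure in this group reduces to the same driver-level error: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the /var/run/socket_vmnet unix socket. A minimal standalone Go probe (illustrative only, not part of minikube; socket path taken from the log above) reproduces the failing check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client is handed; a
	// "connection refused" here matches the STDERR lines above and
	// means no socket_vmnet daemon is serving on that path.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet reachable")
}

If the probe fails, minikube's own retries cannot succeed until the socket_vmnet daemon is brought back up on the host.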

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-778000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-778000 create -f testdata/busybox.yaml: exit status 1 (30.086333ms)

** stderr ** 
	error: context "embed-certs-778000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-778000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.81775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.191ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
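
The kubectl failure here is a kubeconfig problem, not a cluster problem: the preceding start exited before a context named embed-certs-778000 was ever written, so any client loading the kubeconfig fails immediately. A sketch of how client-go surfaces the same message (kubeconfig path copied from the run above; the test helpers wrap this differently):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := &clientcmd.ClientConfigLoadingRules{
		ExplicitPath: "/Users/jenkins/minikube-integration/17877-15366/kubeconfig",
	}
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-778000"}
	// ClientConfig fails during validation when the named context is
	// missing from the kubeconfig, yielding an error of the form:
	// context "embed-certs-778000" does not exist
	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	fmt.Println(err)
}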

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-778000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-778000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-778000 describe deploy/metrics-server -n kube-system: exit status 1 (27.179167ms)

** stderr ** 
	error: context "embed-certs-778000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-778000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.145708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (6.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (6.220602791s)

-- stdout --
	* [embed-certs-778000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-778000" primary control-plane node in "embed-certs-778000" cluster
	* Restarting existing qemu2 VM for "embed-certs-778000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-778000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:33.798207   19766 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:33.798352   19766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:33.798355   19766 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:33.798358   19766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:33.798495   19766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:33.799522   19766 out.go:298] Setting JSON to false
	I0328 12:15:33.815436   19766 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11705,"bootTime":1711641628,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:33.815491   19766 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:33.820337   19766 out.go:177] * [embed-certs-778000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:33.827362   19766 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:33.827412   19766 notify.go:220] Checking for updates...
	I0328 12:15:33.835289   19766 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:33.839346   19766 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:33.842344   19766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:33.845368   19766 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:33.848299   19766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:33.851606   19766 config.go:182] Loaded profile config "embed-certs-778000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:33.851877   19766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:33.856328   19766 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:15:33.863303   19766 start.go:297] selected driver: qemu2
	I0328 12:15:33.863308   19766 start.go:901] validating driver "qemu2" against &{Name:embed-certs-778000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-778000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:33.863371   19766 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:33.865592   19766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:33.865642   19766 cni.go:84] Creating CNI manager for ""
	I0328 12:15:33.865649   19766 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:33.865675   19766 start.go:340] cluster config:
	{Name:embed-certs-778000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-778000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:33.869970   19766 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:33.878282   19766 out.go:177] * Starting "embed-certs-778000" primary control-plane node in "embed-certs-778000" cluster
	I0328 12:15:33.882316   19766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:15:33.882333   19766 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:15:33.882340   19766 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:33.882395   19766 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:33.882400   19766 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:15:33.882474   19766 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/embed-certs-778000/config.json ...
	I0328 12:15:33.882928   19766 start.go:360] acquireMachinesLock for embed-certs-778000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:33.882954   19766 start.go:364] duration metric: took 20.584µs to acquireMachinesLock for "embed-certs-778000"
	I0328 12:15:33.882963   19766 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:33.882969   19766 fix.go:54] fixHost starting: 
	I0328 12:15:33.883084   19766 fix.go:112] recreateIfNeeded on embed-certs-778000: state=Stopped err=<nil>
	W0328 12:15:33.883093   19766 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:33.887299   19766 out.go:177] * Restarting existing qemu2 VM for "embed-certs-778000" ...
	I0328 12:15:33.895360   19766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:54:29:1b:01:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:33.897361   19766 main.go:141] libmachine: STDOUT: 
	I0328 12:15:33.897386   19766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:33.897418   19766 fix.go:56] duration metric: took 14.447834ms for fixHost
	I0328 12:15:33.897424   19766 start.go:83] releasing machines lock for "embed-certs-778000", held for 14.465084ms
	W0328 12:15:33.897429   19766 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:33.897464   19766 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:33.897470   19766 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:38.899748   19766 start.go:360] acquireMachinesLock for embed-certs-778000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:39.904499   19766 start.go:364] duration metric: took 1.004557292s to acquireMachinesLock for "embed-certs-778000"
	I0328 12:15:39.904574   19766 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:39.904593   19766 fix.go:54] fixHost starting: 
	I0328 12:15:39.905351   19766 fix.go:112] recreateIfNeeded on embed-certs-778000: state=Stopped err=<nil>
	W0328 12:15:39.905379   19766 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:39.917800   19766 out.go:177] * Restarting existing qemu2 VM for "embed-certs-778000" ...
	I0328 12:15:39.930630   19766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:54:29:1b:01:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/embed-certs-778000/disk.qcow2
	I0328 12:15:39.941581   19766 main.go:141] libmachine: STDOUT: 
	I0328 12:15:39.941659   19766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:39.941745   19766 fix.go:56] duration metric: took 37.153334ms for fixHost
	I0328 12:15:39.941770   19766 start.go:83] releasing machines lock for "embed-certs-778000", held for 37.2305ms
	W0328 12:15:39.941958   19766 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-778000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-778000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:39.949754   19766 out.go:177] 
	W0328 12:15:39.955835   19766 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:39.955858   19766 out.go:239] * 
	* 
	W0328 12:15:39.957950   19766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:39.973761   19766 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-778000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (62.842ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.29s)
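
The stderr above shows the start path's retry shape: one "StartHost failed, but will try again" warning, a fixed five-second wait, a second identical attempt, then the fatal GUEST_PROVISION exit. A simplified sketch of that flow (illustrative; the real logic in minikube's start.go handles more cases):

package main

import (
	"errors"
	"log"
	"time"
)

// startWithRetry mirrors the single-retry behavior in the log:
// fail, warn, wait 5s, try once more, and return the final error.
func startWithRetry(start func() error) error {
	if err := start(); err != nil {
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		return start()
	}
	return nil
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	log.Println("exiting:", err)
}

Because the underlying daemon never comes back within those five seconds, the second attempt fails identically, which is the pattern repeated across every test in this group.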

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-293000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (33.383208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-293000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.207375ms)

** stderr ** 
	error: context "no-preload-293000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (31.368084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-293000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (30.9855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
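
The (-want +got) block above is the diff layout produced by the go-cmp library: with the VM never started, image list returns an empty set, so every expected image appears on the minus side. A minimal reproduction of that output shape (image list truncated here for brevity; the full expected set is shown above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the host is Stopped, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}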

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-293000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-293000 --alsologtostderr -v=1: exit status 83 (42.528083ms)

-- stdout --
	* The control-plane node no-preload-293000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-293000"

-- /stdout --
** stderr ** 
	I0328 12:15:36.804401   19787 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:36.804542   19787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:36.804546   19787 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:36.804548   19787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:36.804662   19787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:36.804914   19787 out.go:298] Setting JSON to false
	I0328 12:15:36.804926   19787 mustload.go:65] Loading cluster: no-preload-293000
	I0328 12:15:36.805111   19787 config.go:182] Loaded profile config "no-preload-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0328 12:15:36.809847   19787 out.go:177] * The control-plane node no-preload-293000 host is not running: state=Stopped
	I0328 12:15:36.813711   19787 out.go:177]   To start a cluster, run: "minikube start -p no-preload-293000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-293000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (30.792209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (30.816166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
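
Each post-mortem status call passes --format={{.Host}}, which is Go text/template syntax evaluated against minikube's status struct; that is why a stopped profile prints the bare word Stopped in the stdout blocks above. A standalone sketch of the mechanism (the struct here is illustrative; only the Host field and its value are taken from the output above):

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host string // e.g. "Stopped", matching the -- stdout -- blocks above
}

func main() {
	// Parse the same template string the test helpers pass to `status`.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
}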

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.913990125s)

-- stdout --
	* [default-k8s-diff-port-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-925000" primary control-plane node in "default-k8s-diff-port-925000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:37.520078   19822 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:37.520208   19822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:37.520211   19822 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:37.520213   19822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:37.520330   19822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:37.521387   19822 out.go:298] Setting JSON to false
	I0328 12:15:37.537794   19822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11709,"bootTime":1711641628,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:37.537849   19822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:37.542714   19822 out.go:177] * [default-k8s-diff-port-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:37.551184   19822 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:37.553710   19822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:37.551247   19822 notify.go:220] Checking for updates...
	I0328 12:15:37.557704   19822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:37.561588   19822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:37.564708   19822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:37.567740   19822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:37.570991   19822 config.go:182] Loaded profile config "embed-certs-778000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:37.571053   19822 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:37.571108   19822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:37.578729   19822 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:15:37.585663   19822 start.go:297] selected driver: qemu2
	I0328 12:15:37.585669   19822 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:15:37.585675   19822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:37.587913   19822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 12:15:37.590694   19822 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:15:37.593823   19822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:37.593860   19822 cni.go:84] Creating CNI manager for ""
	I0328 12:15:37.593868   19822 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:37.593873   19822 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:15:37.593906   19822 start.go:340] cluster config:
	{Name:default-k8s-diff-port-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:37.598613   19822 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:37.606706   19822 out.go:177] * Starting "default-k8s-diff-port-925000" primary control-plane node in "default-k8s-diff-port-925000" cluster
	I0328 12:15:37.609676   19822 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:15:37.609694   19822 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:15:37.609704   19822 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:37.609762   19822 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:37.609768   19822 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:15:37.609822   19822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/default-k8s-diff-port-925000/config.json ...
	I0328 12:15:37.609833   19822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/default-k8s-diff-port-925000/config.json: {Name:mka93d8b3a3aa3b979974e69e91dd8e4cd0c5917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:15:37.610054   19822 start.go:360] acquireMachinesLock for default-k8s-diff-port-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:37.610090   19822 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "default-k8s-diff-port-925000"
	I0328 12:15:37.610105   19822 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:37.610154   19822 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:37.617710   19822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:37.634691   19822 start.go:159] libmachine.API.Create for "default-k8s-diff-port-925000" (driver="qemu2")
	I0328 12:15:37.634717   19822 client.go:168] LocalClient.Create starting
	I0328 12:15:37.634769   19822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:37.634798   19822 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:37.634812   19822 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:37.634857   19822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:37.634886   19822 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:37.634893   19822 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:37.635255   19822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:37.786353   19822 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:37.876727   19822 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:37.876736   19822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:37.876919   19822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:37.889094   19822 main.go:141] libmachine: STDOUT: 
	I0328 12:15:37.889115   19822 main.go:141] libmachine: STDERR: 
	I0328 12:15:37.889172   19822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2 +20000M
	I0328 12:15:37.900123   19822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:37.900136   19822 main.go:141] libmachine: STDERR: 
	I0328 12:15:37.900152   19822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:37.900165   19822 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:37.900195   19822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:06:82:ff:19:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:37.901906   19822 main.go:141] libmachine: STDOUT: 
	I0328 12:15:37.901921   19822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:37.901938   19822 client.go:171] duration metric: took 267.211125ms to LocalClient.Create
	I0328 12:15:39.904220   19822 start.go:128] duration metric: took 2.294003833s to createHost
	I0328 12:15:39.904290   19822 start.go:83] releasing machines lock for "default-k8s-diff-port-925000", held for 2.294158792s
	W0328 12:15:39.904345   19822 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:39.925809   19822 out.go:177] * Deleting "default-k8s-diff-port-925000" in qemu2 ...
	W0328 12:15:39.984177   19822 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:39.984221   19822 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:44.984616   19822 start.go:360] acquireMachinesLock for default-k8s-diff-port-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:44.985055   19822 start.go:364] duration metric: took 324.25µs to acquireMachinesLock for "default-k8s-diff-port-925000"
	I0328 12:15:44.985193   19822 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:44.985546   19822 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:45.004287   19822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:45.053435   19822 start.go:159] libmachine.API.Create for "default-k8s-diff-port-925000" (driver="qemu2")
	I0328 12:15:45.053491   19822 client.go:168] LocalClient.Create starting
	I0328 12:15:45.053612   19822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:45.053679   19822 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:45.053704   19822 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:45.053763   19822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:45.053806   19822 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:45.053817   19822 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:45.054362   19822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:45.213266   19822 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:45.332539   19822 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:45.332548   19822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:45.332731   19822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:45.344980   19822 main.go:141] libmachine: STDOUT: 
	I0328 12:15:45.345000   19822 main.go:141] libmachine: STDERR: 
	I0328 12:15:45.345065   19822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2 +20000M
	I0328 12:15:45.355772   19822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:45.355787   19822 main.go:141] libmachine: STDERR: 
	I0328 12:15:45.355803   19822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:45.355815   19822 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:45.355855   19822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e4:a6:d7:dc:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:45.357612   19822 main.go:141] libmachine: STDOUT: 
	I0328 12:15:45.357630   19822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:45.357646   19822 client.go:171] duration metric: took 304.145042ms to LocalClient.Create
	I0328 12:15:47.359846   19822 start.go:128] duration metric: took 2.37423525s to createHost
	I0328 12:15:47.359945   19822 start.go:83] releasing machines lock for "default-k8s-diff-port-925000", held for 2.374836084s
	W0328 12:15:47.360318   19822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:47.371885   19822 out.go:177] 
	W0328 12:15:47.379034   19822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:47.379060   19822 out.go:239] * 
	* 
	W0328 12:15:47.381903   19822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:47.389036   19822 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
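Note: every VM-create attempt in this log fails at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal sanity check on the build host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest, would be:

	# confirm the daemon socket the client is trying to reach exists
	ls -l /var/run/socket_vmnet
	# if it is missing or stale, restart the daemon (vmnet requires root)
	sudo brew services restart socket_vmnet

If the socket exists but still refuses connections, restarting the daemon and rerunning the failed start command above is the quickest way to confirm this is an environment problem rather than a minikube regression.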
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (67.016084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-778000" does not exist
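Note: this and the later "context ... does not exist" failures are cascades from the failed start of this profile, not independent bugs: the VM was never created, so minikube never wrote an embed-certs-778000 context into the kubeconfig. A quick way to confirm, assuming the test run's KUBECONFIG is in effect:

	# list the contexts minikube actually registered
	kubectl config get-contexts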
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (32.978291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-778000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-778000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-778000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.22425ms)

** stderr ** 
	error: context "embed-certs-778000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-778000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
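Note: the assertion scans the deployment description for the overridden image string " registry.k8s.io/echoserver:1.4", and the deployment info is empty because the describe call above already failed. Against a healthy cluster, an equivalent manual check (illustrative, not the test's exact code path) would be:

	kubectl --context embed-certs-778000 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
		-o jsonpath='{.spec.template.spec.containers[*].image}'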
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (30.846042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-778000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
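Note: the minus-prefixed -want entries are the full expected image set for Kubernetes v1.29.3 plus minikube's own storage provisioner; the +got side is empty because `image list` had no running VM to query. The upstream part of that list can be reproduced independently (illustrative; storage-provisioner is minikube-specific and will not appear):

	kubeadm config images list --kubernetes-version v1.29.3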
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.293584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-778000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-778000 --alsologtostderr -v=1: exit status 83 (45.111209ms)

-- stdout --
	* The control-plane node embed-certs-778000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-778000"

-- /stdout --
** stderr ** 
	I0328 12:15:40.248592   19846 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:40.248735   19846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:40.248739   19846 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:40.248749   19846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:40.248879   19846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:40.249112   19846 out.go:298] Setting JSON to false
	I0328 12:15:40.249120   19846 mustload.go:65] Loading cluster: embed-certs-778000
	I0328 12:15:40.249314   19846 config.go:182] Loaded profile config "embed-certs-778000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:40.253529   19846 out.go:177] * The control-plane node embed-certs-778000 host is not running: state=Stopped
	I0328 12:15:40.258321   19846 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-778000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-778000 --alsologtostderr -v=1 failed: exit status 83
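Note: exit status 83 here appears to be minikube's refuse-to-act path rather than a crash: pause detects state=Stopped and prints the start hint instead of touching the cluster. The recovery the output itself suggests is simply:

	minikube start -p embed-certs-778000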
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.13525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (31.085583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-778000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.811933042s)

-- stdout --
	* [newest-cni-644000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-644000" primary control-plane node in "newest-cni-644000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0328 12:15:40.723520   19869 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:40.723632   19869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:40.723636   19869 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:40.723639   19869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:40.723779   19869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:40.724856   19869 out.go:298] Setting JSON to false
	I0328 12:15:40.741046   19869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11712,"bootTime":1711641628,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:40.741107   19869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:40.746453   19869 out.go:177] * [newest-cni-644000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:40.753563   19869 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:40.757334   19869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:40.753590   19869 notify.go:220] Checking for updates...
	I0328 12:15:40.763372   19869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:40.766351   19869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:40.769402   19869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:40.772443   19869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:40.775791   19869 config.go:182] Loaded profile config "default-k8s-diff-port-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:40.775857   19869 config.go:182] Loaded profile config "multinode-652000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:40.775910   19869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:40.780396   19869 out.go:177] * Using the qemu2 driver based on user configuration
	I0328 12:15:40.786392   19869 start.go:297] selected driver: qemu2
	I0328 12:15:40.786399   19869 start.go:901] validating driver "qemu2" against <nil>
	I0328 12:15:40.786410   19869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:40.788759   19869 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0328 12:15:40.788785   19869 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0328 12:15:40.797398   19869 out.go:177] * Automatically selected the socket_vmnet network
	I0328 12:15:40.800449   19869 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0328 12:15:40.800493   19869 cni.go:84] Creating CNI manager for ""
	I0328 12:15:40.800502   19869 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:40.800507   19869 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 12:15:40.800532   19869 start.go:340] cluster config:
	{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:40.805227   19869 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:40.812421   19869 out.go:177] * Starting "newest-cni-644000" primary control-plane node in "newest-cni-644000" cluster
	I0328 12:15:40.816303   19869 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 12:15:40.816320   19869 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0328 12:15:40.816331   19869 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:40.816415   19869 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:40.816421   19869 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0328 12:15:40.816520   19869 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/newest-cni-644000/config.json ...
	I0328 12:15:40.816535   19869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/newest-cni-644000/config.json: {Name:mkb6b91321128707a45138772c14a6cc8c7c64ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 12:15:40.816841   19869 start.go:360] acquireMachinesLock for newest-cni-644000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:40.816878   19869 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "newest-cni-644000"
	I0328 12:15:40.816893   19869 start.go:93] Provisioning new machine with config: &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:40.816943   19869 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:40.821444   19869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:40.840045   19869 start.go:159] libmachine.API.Create for "newest-cni-644000" (driver="qemu2")
	I0328 12:15:40.840067   19869 client.go:168] LocalClient.Create starting
	I0328 12:15:40.840122   19869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:40.840152   19869 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:40.840166   19869 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:40.840214   19869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:40.840238   19869 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:40.840246   19869 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:40.840601   19869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:40.991076   19869 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:41.086428   19869 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:41.086435   19869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:41.086624   19869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:41.099015   19869 main.go:141] libmachine: STDOUT: 
	I0328 12:15:41.099038   19869 main.go:141] libmachine: STDERR: 
	I0328 12:15:41.099093   19869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2 +20000M
	I0328 12:15:41.109871   19869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:41.109887   19869 main.go:141] libmachine: STDERR: 
	I0328 12:15:41.109898   19869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:41.109903   19869 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:41.109929   19869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:19:0d:70:31:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:41.111624   19869 main.go:141] libmachine: STDOUT: 
	I0328 12:15:41.111641   19869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:41.111660   19869 client.go:171] duration metric: took 271.583625ms to LocalClient.Create
	I0328 12:15:43.113999   19869 start.go:128] duration metric: took 2.296984959s to createHost
	I0328 12:15:43.114101   19869 start.go:83] releasing machines lock for "newest-cni-644000", held for 2.297185125s
	W0328 12:15:43.114158   19869 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:43.120217   19869 out.go:177] * Deleting "newest-cni-644000" in qemu2 ...
	W0328 12:15:43.154610   19869 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:43.154646   19869 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:48.156862   19869 start.go:360] acquireMachinesLock for newest-cni-644000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:48.157283   19869 start.go:364] duration metric: took 307.875µs to acquireMachinesLock for "newest-cni-644000"
	I0328 12:15:48.157490   19869 start.go:93] Provisioning new machine with config: &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 12:15:48.157848   19869 start.go:125] createHost starting for "" (driver="qemu2")
	I0328 12:15:48.166382   19869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 12:15:48.215463   19869 start.go:159] libmachine.API.Create for "newest-cni-644000" (driver="qemu2")
	I0328 12:15:48.215516   19869 client.go:168] LocalClient.Create starting
	I0328 12:15:48.215618   19869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/ca.pem
	I0328 12:15:48.215678   19869 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:48.215699   19869 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:48.215755   19869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17877-15366/.minikube/certs/cert.pem
	I0328 12:15:48.215789   19869 main.go:141] libmachine: Decoding PEM data...
	I0328 12:15:48.215802   19869 main.go:141] libmachine: Parsing certificate...
	I0328 12:15:48.216333   19869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0328 12:15:48.407458   19869 main.go:141] libmachine: Creating SSH key...
	I0328 12:15:48.435102   19869 main.go:141] libmachine: Creating Disk image...
	I0328 12:15:48.435107   19869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0328 12:15:48.435265   19869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:48.447338   19869 main.go:141] libmachine: STDOUT: 
	I0328 12:15:48.447357   19869 main.go:141] libmachine: STDERR: 
	I0328 12:15:48.447421   19869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2 +20000M
	I0328 12:15:48.458154   19869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0328 12:15:48.458169   19869 main.go:141] libmachine: STDERR: 
	I0328 12:15:48.458181   19869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:48.458184   19869 main.go:141] libmachine: Starting QEMU VM...
	I0328 12:15:48.458227   19869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:13:1b:eb:c1:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:48.459992   19869 main.go:141] libmachine: STDOUT: 
	I0328 12:15:48.460005   19869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:48.460017   19869 client.go:171] duration metric: took 244.49225ms to LocalClient.Create
	I0328 12:15:50.462340   19869 start.go:128] duration metric: took 2.304414167s to createHost
	I0328 12:15:50.462418   19869 start.go:83] releasing machines lock for "newest-cni-644000", held for 2.305062916s
	W0328 12:15:50.462807   19869 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:50.472507   19869 out.go:177] 
	W0328 12:15:50.480588   19869 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:50.480615   19869 out.go:239] * 
	* 
	W0328 12:15:50.483562   19869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:50.491521   19869 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
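Note: separate from the socket_vmnet failure, the stderr log warns that a bare --network-plugin=cni leaves CNI installation to the user, and minikube fell back to recommending a bridge CNI ('Found "bridge CNI" CNI - setting NetworkPlugin=cni'). The user-friendly form the warning points at would look like this (illustrative command line, not taken from the test):

	out/minikube-darwin-arm64 start -p newest-cni-644000 --cni=bridge --driver=qemu2 --kubernetes-version=v1.30.0-beta.0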
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (69.646167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-925000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-925000 create -f testdata/busybox.yaml: exit status 1 (28.9465ms)

** stderr ** 
	error: context "default-k8s-diff-port-925000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-925000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (31.077583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.94275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-925000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-925000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-925000 describe deploy/metrics-server -n kube-system: exit status 1 (26.794917ms)

** stderr ** 
	error: context "default-k8s-diff-port-925000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-925000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
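Note: `addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain` rewrites the addon's image reference to fake.domain/registry.k8s.io/echoserver:1.4, which is exactly the string the test then expects in the deployment spec; the describe call failed first because the profile's context was never created. On a running cluster the expectation could be checked directly (illustrative):

	kubectl --context default-k8s-diff-port-925000 -n kube-system get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[0].image}'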
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.905083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.183217083s)

-- stdout --
	* [default-k8s-diff-port-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-925000" primary control-plane node in "default-k8s-diff-port-925000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:51.131706   19932 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:51.131851   19932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:51.131854   19932 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:51.131857   19932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:51.131997   19932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:51.132986   19932 out.go:298] Setting JSON to false
	I0328 12:15:51.149196   19932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11723,"bootTime":1711641628,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:51.149268   19932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:51.154134   19932 out.go:177] * [default-k8s-diff-port-925000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:51.160085   19932 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:51.163091   19932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:51.160136   19932 notify.go:220] Checking for updates...
	I0328 12:15:51.168958   19932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:51.172128   19932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:51.175074   19932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:51.176640   19932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:51.180318   19932 config.go:182] Loaded profile config "default-k8s-diff-port-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:51.180580   19932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:51.184080   19932 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:15:51.190042   19932 start.go:297] selected driver: qemu2
	I0328 12:15:51.190047   19932 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:51.190106   19932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:51.192412   19932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 12:15:51.192461   19932 cni.go:84] Creating CNI manager for ""
	I0328 12:15:51.192468   19932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:51.192494   19932 start.go:340] cluster config:
	{Name:default-k8s-diff-port-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:51.196739   19932 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:51.205054   19932 out.go:177] * Starting "default-k8s-diff-port-925000" primary control-plane node in "default-k8s-diff-port-925000" cluster
	I0328 12:15:51.209067   19932 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 12:15:51.209082   19932 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 12:15:51.209092   19932 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:51.209153   19932 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:51.209159   19932 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 12:15:51.209235   19932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/default-k8s-diff-port-925000/config.json ...
	I0328 12:15:51.209717   19932 start.go:360] acquireMachinesLock for default-k8s-diff-port-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:51.209744   19932 start.go:364] duration metric: took 21.459µs to acquireMachinesLock for "default-k8s-diff-port-925000"
	I0328 12:15:51.209753   19932 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:51.209758   19932 fix.go:54] fixHost starting: 
	I0328 12:15:51.209873   19932 fix.go:112] recreateIfNeeded on default-k8s-diff-port-925000: state=Stopped err=<nil>
	W0328 12:15:51.209884   19932 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:51.213090   19932 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-925000" ...
	I0328 12:15:51.218656   19932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e4:a6:d7:dc:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:51.220713   19932 main.go:141] libmachine: STDOUT: 
	I0328 12:15:51.220731   19932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:51.220759   19932 fix.go:56] duration metric: took 11.000042ms for fixHost
	I0328 12:15:51.220762   19932 start.go:83] releasing machines lock for "default-k8s-diff-port-925000", held for 11.014334ms
	W0328 12:15:51.220769   19932 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:51.220801   19932 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:51.220806   19932 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:56.222933   19932 start.go:360] acquireMachinesLock for default-k8s-diff-port-925000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:56.223248   19932 start.go:364] duration metric: took 240.5µs to acquireMachinesLock for "default-k8s-diff-port-925000"
	I0328 12:15:56.223379   19932 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:56.223396   19932 fix.go:54] fixHost starting: 
	I0328 12:15:56.224054   19932 fix.go:112] recreateIfNeeded on default-k8s-diff-port-925000: state=Stopped err=<nil>
	W0328 12:15:56.224080   19932 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:56.233385   19932 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-925000" ...
	I0328 12:15:56.236613   19932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e4:a6:d7:dc:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/default-k8s-diff-port-925000/disk.qcow2
	I0328 12:15:56.246461   19932 main.go:141] libmachine: STDOUT: 
	I0328 12:15:56.246573   19932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:56.246654   19932 fix.go:56] duration metric: took 23.256583ms for fixHost
	I0328 12:15:56.246680   19932 start.go:83] releasing machines lock for "default-k8s-diff-port-925000", held for 23.405333ms
	W0328 12:15:56.246900   19932 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:56.254428   19932 out.go:177] 
	W0328 12:15:56.258461   19932 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:56.258498   19932 out.go:239] * 
	* 
	W0328 12:15:56.261000   19932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:56.270243   19932 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-925000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (68.042667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
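Every qemu2 restart in this run dies at the same step: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"). In other words, no socket_vmnet daemon was listening on the build agent, and the GUEST_PROVISION exit (status 80) is a symptom rather than the root cause. A minimal sketch that reproduces the same probe outside minikube, assuming only the socket path taken from the command lines above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Probe the Unix socket the qemu2 driver depends on. "connection refused"
	// here corresponds to the driver failure captured above and means the
	// socket_vmnet daemon is not running (or not listening at this path).
	const path = "/var/run/socket_vmnet" // path from the failing command line
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", path)
}

On this agent the probe would exit 1 with the same "connection refused" seen in every restart attempt in this report.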

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.180705584s)

-- stdout --
	* [newest-cni-644000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-644000" primary control-plane node in "newest-cni-644000" cluster
	* Restarting existing qemu2 VM for "newest-cni-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0328 12:15:52.632497   19947 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:52.632648   19947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:52.632651   19947 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:52.632654   19947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:52.632779   19947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:52.633750   19947 out.go:298] Setting JSON to false
	I0328 12:15:52.649851   19947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11724,"bootTime":1711641628,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 12:15:52.649914   19947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 12:15:52.655024   19947 out.go:177] * [newest-cni-644000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 12:15:52.662021   19947 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 12:15:52.662049   19947 notify.go:220] Checking for updates...
	I0328 12:15:52.666113   19947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 12:15:52.669000   19947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 12:15:52.671996   19947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 12:15:52.674980   19947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 12:15:52.677931   19947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 12:15:52.681350   19947 config.go:182] Loaded profile config "newest-cni-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0328 12:15:52.681596   19947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 12:15:52.685964   19947 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 12:15:52.692954   19947 start.go:297] selected driver: qemu2
	I0328 12:15:52.692962   19947 start.go:901] validating driver "qemu2" against &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:52.693026   19947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 12:15:52.695317   19947 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0328 12:15:52.695357   19947 cni.go:84] Creating CNI manager for ""
	I0328 12:15:52.695366   19947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 12:15:52.695389   19947 start.go:340] cluster config:
	{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 12:15:52.699818   19947 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 12:15:52.707909   19947 out.go:177] * Starting "newest-cni-644000" primary control-plane node in "newest-cni-644000" cluster
	I0328 12:15:52.712030   19947 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 12:15:52.712045   19947 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0328 12:15:52.712060   19947 cache.go:56] Caching tarball of preloaded images
	I0328 12:15:52.712125   19947 preload.go:173] Found /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 12:15:52.712130   19947 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0328 12:15:52.712202   19947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/newest-cni-644000/config.json ...
	I0328 12:15:52.712680   19947 start.go:360] acquireMachinesLock for newest-cni-644000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:52.712707   19947 start.go:364] duration metric: took 21.209µs to acquireMachinesLock for "newest-cni-644000"
	I0328 12:15:52.712716   19947 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:52.712721   19947 fix.go:54] fixHost starting: 
	I0328 12:15:52.712844   19947 fix.go:112] recreateIfNeeded on newest-cni-644000: state=Stopped err=<nil>
	W0328 12:15:52.712852   19947 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:52.716941   19947 out.go:177] * Restarting existing qemu2 VM for "newest-cni-644000" ...
	I0328 12:15:52.724888   19947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:13:1b:eb:c1:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:52.726883   19947 main.go:141] libmachine: STDOUT: 
	I0328 12:15:52.726907   19947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:52.726938   19947 fix.go:56] duration metric: took 14.216875ms for fixHost
	I0328 12:15:52.726943   19947 start.go:83] releasing machines lock for "newest-cni-644000", held for 14.231375ms
	W0328 12:15:52.726950   19947 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:52.726984   19947 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:52.726990   19947 start.go:728] Will try again in 5 seconds ...
	I0328 12:15:57.729214   19947 start.go:360] acquireMachinesLock for newest-cni-644000: {Name:mk0e64b2b57b7837b722c50a9b29e4f2ce729d45 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 12:15:57.729568   19947 start.go:364] duration metric: took 228.833µs to acquireMachinesLock for "newest-cni-644000"
	I0328 12:15:57.729671   19947 start.go:96] Skipping create...Using existing machine configuration
	I0328 12:15:57.729692   19947 fix.go:54] fixHost starting: 
	I0328 12:15:57.730454   19947 fix.go:112] recreateIfNeeded on newest-cni-644000: state=Stopped err=<nil>
	W0328 12:15:57.730479   19947 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 12:15:57.734873   19947 out.go:177] * Restarting existing qemu2 VM for "newest-cni-644000" ...
	I0328 12:15:57.742128   19947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:13:1b:eb:c1:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17877-15366/.minikube/machines/newest-cni-644000/disk.qcow2
	I0328 12:15:57.749539   19947 main.go:141] libmachine: STDOUT: 
	I0328 12:15:57.749597   19947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0328 12:15:57.749662   19947 fix.go:56] duration metric: took 19.972792ms for fixHost
	I0328 12:15:57.749675   19947 start.go:83] releasing machines lock for "newest-cni-644000", held for 20.085667ms
	W0328 12:15:57.749868   19947 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0328 12:15:57.757862   19947 out.go:177] 
	W0328 12:15:57.760866   19947 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0328 12:15:57.760939   19947 out.go:239] * 
	* 
	W0328 12:15:57.762206   19947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 12:15:57.772841   19947 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (69.990959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-925000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (33.135791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-925000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.448ms)

** stderr ** 
	error: context "default-k8s-diff-port-925000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.97775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-925000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.748458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
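The "(-want +got)" block above is a cmp-style diff in which every expected v1.29.3 image sits on the "-" side and nothing appears on the "+" side: "image list" returned an empty set because the VM never started, not because individual images were evicted. A sketch of how such a diff is produced, assuming github.com/google/go-cmp (whose output format matches the block above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two representative entries from the expected image list above.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // a stopped VM yields no images at all
	if diff := cmp.Diff(want, got); diff != "" {
		// Every "-" line is an expected image absent from the listing.
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}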

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-925000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-925000 --alsologtostderr -v=1: exit status 83 (45.77325ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-925000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-925000"

-- /stdout --
** stderr ** 
	I0328 12:15:56.547739   19966 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:56.547880   19966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:56.547883   19966 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:56.547886   19966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:56.548033   19966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:56.548242   19966 out.go:298] Setting JSON to false
	I0328 12:15:56.548250   19966 mustload.go:65] Loading cluster: default-k8s-diff-port-925000
	I0328 12:15:56.548436   19966 config.go:182] Loaded profile config "default-k8s-diff-port-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 12:15:56.551619   19966 out.go:177] * The control-plane node default-k8s-diff-port-925000 host is not running: state=Stopped
	I0328 12:15:56.559590   19966 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-925000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-925000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.949667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (30.592333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-644000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (32.29375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1: exit status 83 (44.026542ms)

-- stdout --
	* The control-plane node newest-cni-644000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-644000"

-- /stdout --
** stderr ** 
	I0328 12:15:57.962825   19999 out.go:291] Setting OutFile to fd 1 ...
	I0328 12:15:57.962979   19999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:57.962982   19999 out.go:304] Setting ErrFile to fd 2...
	I0328 12:15:57.962985   19999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 12:15:57.963116   19999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 12:15:57.963343   19999 out.go:298] Setting JSON to false
	I0328 12:15:57.963351   19999 mustload.go:65] Loading cluster: newest-cni-644000
	I0328 12:15:57.963544   19999 config.go:182] Loaded profile config "newest-cni-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0328 12:15:57.967544   19999 out.go:177] * The control-plane node newest-cni-644000 host is not running: state=Stopped
	I0328 12:15:57.971562   19999 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-644000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (32.105541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (32.121334ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.29.3/json-events 20.84
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.24
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-beta.0/json-events 19.45
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.43
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.77
48 TestErrorSpam/start 0.41
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 9.32
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.08
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.36
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.25
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.18
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.29
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 5.04
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.39
267 TestNoKubernetes/serial/Stop 2.91
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.63
284 TestStartStop/group/old-k8s-version/serial/Stop 1.93
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
297 TestStartStop/group/no-preload/serial/Stop 2.99
298 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
302 TestStartStop/group/embed-certs/serial/Stop 1.87
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.29
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 1.84
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-603000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-603000: exit status 85 (97.21275ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |          |
	|         | -p download-only-603000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 11:47:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 11:47:55.263329   15786 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:47:55.263551   15786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:47:55.263554   15786 out.go:304] Setting ErrFile to fd 2...
	I0328 11:47:55.263556   15786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:47:55.263674   15786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	W0328 11:47:55.263762   15786 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17877-15366/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17877-15366/.minikube/config/config.json: no such file or directory
	I0328 11:47:55.265009   15786 out.go:298] Setting JSON to true
	I0328 11:47:55.282555   15786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10047,"bootTime":1711641628,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:47:55.282630   15786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:47:55.288027   15786 out.go:97] [download-only-603000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:47:55.291933   15786 out.go:169] MINIKUBE_LOCATION=17877
	I0328 11:47:55.288120   15786 notify.go:220] Checking for updates...
	W0328 11:47:55.288184   15786 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball: no such file or directory
	I0328 11:47:55.299779   15786 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:47:55.302900   15786 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:47:55.305940   15786 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:47:55.312910   15786 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	W0328 11:47:55.320969   15786 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 11:47:55.321228   15786 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:47:55.325924   15786 out.go:97] Using the qemu2 driver based on user configuration
	I0328 11:47:55.325944   15786 start.go:297] selected driver: qemu2
	I0328 11:47:55.325960   15786 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:47:55.326046   15786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:47:55.328900   15786 out.go:169] Automatically selected the socket_vmnet network
	I0328 11:47:55.335290   15786 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0328 11:47:55.335437   15786 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 11:47:55.335510   15786 cni.go:84] Creating CNI manager for ""
	I0328 11:47:55.335529   15786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0328 11:47:55.335580   15786 start.go:340] cluster config:
	{Name:download-only-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:47:55.340378   15786 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:47:55.344973   15786 out.go:97] Downloading VM boot image ...
	I0328 11:47:55.344991   15786 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso
	I0328 11:48:13.097110   15786 out.go:97] Starting "download-only-603000" primary control-plane node in "download-only-603000" cluster
	I0328 11:48:13.097136   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:13.381865   15786 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 11:48:13.381982   15786 cache.go:56] Caching tarball of preloaded images
	I0328 11:48:13.382786   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:13.388743   15786 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0328 11:48:13.388775   15786 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:13.993666   15786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0328 11:48:33.441236   15786 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:33.441424   15786 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:34.139761   15786 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0328 11:48:34.139982   15786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-603000/config.json ...
	I0328 11:48:34.139998   15786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-603000/config.json: {Name:mk0d42e3126b55e5ccf673930b82c29c9b85121c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:48:34.141116   15786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0328 11:48:34.141308   15786 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0328 11:48:34.556207   15786 out.go:169] 
	W0328 11:48:34.560334   15786 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220 0x1071db220] Decompressors:map[bz2:0x140006a6a70 gz:0x140006a6a78 tar:0x140006a69a0 tar.bz2:0x140006a69e0 tar.gz:0x140006a69f0 tar.xz:0x140006a6a40 tar.zst:0x140006a6a50 tbz2:0x140006a69e0 tgz:0x140006a69f0 txz:0x140006a6a40 tzst:0x140006a6a50 xz:0x140006a6a80 zip:0x140006a6a90 zst:0x140006a6a88] Getters:map[file:0x1400079c8c0 http:0x140008141e0 https:0x14000814230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0328 11:48:34.560361   15786 out_reason.go:110] 
	W0328 11:48:34.568218   15786 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 11:48:34.572264   15786 out.go:169] 
	
	
	* The control-plane node download-only-603000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-603000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
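
Note: the checksum 404 recorded above can be reproduced outside the test harness. A minimal check, assuming only that curl is installed (the URL is copied verbatim from the download.go log line):

    # Print the final HTTP status of the kubectl checksum URL minikube tried
    # to fetch; -L follows the dl.k8s.io redirect to the release CDN.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
    # Per the log above, this prints 404: the v1.20.0 darwin/arm64 checksum
    # file is not available, which is what fails the kubectl caching step.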

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-603000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.25s)

TestDownloadOnly/v1.29.3/json-events (20.84s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-625000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-625000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (20.844458958s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (20.84s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-625000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-625000: exit status 85 (82.39875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |                     |
	|         | -p download-only-603000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| delete  | -p download-only-603000        | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| start   | -o=json --download-only        | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
	|         | -p download-only-625000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 11:48:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 11:48:35.261346   15841 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:48:35.261465   15841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:48:35.261469   15841 out.go:304] Setting ErrFile to fd 2...
	I0328 11:48:35.261471   15841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:48:35.261589   15841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:48:35.262602   15841 out.go:298] Setting JSON to true
	I0328 11:48:35.278613   15841 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10087,"bootTime":1711641628,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:48:35.278681   15841 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:48:35.282733   15841 out.go:97] [download-only-625000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:48:35.286503   15841 out.go:169] MINIKUBE_LOCATION=17877
	I0328 11:48:35.282838   15841 notify.go:220] Checking for updates...
	I0328 11:48:35.293635   15841 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:48:35.296591   15841 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:48:35.299626   15841 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:48:35.302614   15841 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	W0328 11:48:35.308549   15841 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 11:48:35.308742   15841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:48:35.311535   15841 out.go:97] Using the qemu2 driver based on user configuration
	I0328 11:48:35.311543   15841 start.go:297] selected driver: qemu2
	I0328 11:48:35.311546   15841 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:48:35.311584   15841 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:48:35.314534   15841 out.go:169] Automatically selected the socket_vmnet network
	I0328 11:48:35.319658   15841 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0328 11:48:35.319755   15841 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 11:48:35.319799   15841 cni.go:84] Creating CNI manager for ""
	I0328 11:48:35.319807   15841 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 11:48:35.319812   15841 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 11:48:35.319850   15841 start.go:340] cluster config:
	{Name:download-only-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:48:35.324074   15841 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:48:35.327632   15841 out.go:97] Starting "download-only-625000" primary control-plane node in "download-only-625000" cluster
	I0328 11:48:35.327642   15841 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:48:35.963138   15841 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:48:35.963207   15841 cache.go:56] Caching tarball of preloaded images
	I0328 11:48:35.963912   15841 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:48:35.969476   15841 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0328 11:48:35.969512   15841 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:36.562385   15841 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0328 11:48:52.779205   15841 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:52.779382   15841 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:53.337035   15841 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 11:48:53.337228   15841 profile.go:143] Saving config to /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-625000/config.json ...
	I0328 11:48:53.337244   15841 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17877-15366/.minikube/profiles/download-only-625000/config.json: {Name:mk6ba01e01344dfbad95dc398955a07c230c628f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 11:48:53.337470   15841 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 11:48:53.338280   15841 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-625000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-625000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
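
For reference, the preload tarball cached during this run can be spot-checked by hand. A sketch assuming macOS's md5(1) and the cache path shown in the log:

    # Recompute the tarball digest and compare it against the md5 embedded in
    # the download URL above (checksum=md5:c0bb0715201da444334d968c298f45eb).
    md5 -q /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4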

TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-625000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-beta.0/json-events (19.45s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-549000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-549000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 : (19.454690125s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (19.45s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-549000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-549000: exit status 85 (83.997708ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:47 PDT |                     |
	|         | -p download-only-603000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| delete  | -p download-only-603000             | download-only-603000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| start   | -o=json --download-only             | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
	|         | -p download-only-625000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| delete  | -p download-only-625000             | download-only-625000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT | 28 Mar 24 11:48 PDT |
	| start   | -o=json --download-only             | download-only-549000 | jenkins | v1.33.0-beta.0 | 28 Mar 24 11:48 PDT |                     |
	|         | -p download-only-549000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 11:48:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 11:48:56.655409   15885 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:48:56.655541   15885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:48:56.655544   15885 out.go:304] Setting ErrFile to fd 2...
	I0328 11:48:56.655547   15885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:48:56.655681   15885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:48:56.656712   15885 out.go:298] Setting JSON to true
	I0328 11:48:56.672806   15885 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10108,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:48:56.672867   15885 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:48:56.676558   15885 out.go:97] [download-only-549000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:48:56.680428   15885 out.go:169] MINIKUBE_LOCATION=17877
	I0328 11:48:56.676636   15885 notify.go:220] Checking for updates...
	I0328 11:48:56.688450   15885 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:48:56.691464   15885 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:48:56.694474   15885 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:48:56.697425   15885 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	W0328 11:48:56.703446   15885 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 11:48:56.703627   15885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:48:56.706434   15885 out.go:97] Using the qemu2 driver based on user configuration
	I0328 11:48:56.706443   15885 start.go:297] selected driver: qemu2
	I0328 11:48:56.706447   15885 start.go:901] validating driver "qemu2" against <nil>
	I0328 11:48:56.706506   15885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 11:48:56.709388   15885 out.go:169] Automatically selected the socket_vmnet network
	I0328 11:48:56.714575   15885 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0328 11:48:56.714667   15885 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 11:48:56.714708   15885 cni.go:84] Creating CNI manager for ""
	I0328 11:48:56.714716   15885 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0328 11:48:56.714728   15885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 11:48:56.714776   15885 start.go:340] cluster config:
	{Name:download-only-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:48:56.719088   15885 iso.go:125] acquiring lock: {Name:mkbc175b071668eea8a5df8fa25a81c651c26194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 11:48:56.722423   15885 out.go:97] Starting "download-only-549000" primary control-plane node in "download-only-549000" cluster
	I0328 11:48:56.722430   15885 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 11:48:57.351085   15885 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0328 11:48:57.351190   15885 cache.go:56] Caching tarball of preloaded images
	I0328 11:48:57.352008   15885 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0328 11:48:57.357587   15885 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0328 11:48:57.357635   15885 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0328 11:48:57.942906   15885 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:e2591d3d8d44bfdea8fdcdf9682f34bf -> /Users/jenkins/minikube-integration/17877-15366/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-549000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-549000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-549000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.43s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-643000 --alsologtostderr --binary-mirror http://127.0.0.1:52935 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-643000
--- PASS: TestBinaryMirror (0.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-925000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-925000: exit status 85 (59.4975ms)

-- stdout --
	* Profile "addons-925000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-925000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-925000: exit status 85 (63.204042ms)

-- stdout --
	* Profile "addons-925000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.77s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.77s)

TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status: exit status 7 (34.100625ms)

-- stdout --
	nospam-796000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status: exit status 7 (32.078709ms)

-- stdout --
	nospam-796000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status: exit status 7 (32.125292ms)

-- stdout --
	nospam-796000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
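
The exit status 7 asserted above is the signal scripts key off when a profile's host is stopped. A hypothetical shell fragment (profile name reused from this test) showing the pattern:

    # `minikube status` exits non-zero when the host is not running; inside
    # the `||` branch, $? still holds that exit code (7 in the runs above).
    out/minikube-darwin-arm64 -p nospam-796000 status || echo "not running, exit $?"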

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause: exit status 83 (41.678958ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause: exit status 83 (40.898583ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause: exit status 83 (41.835917ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause: exit status 83 (42.871209ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause: exit status 83 (41.803958ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause: exit status 83 (41.80075ms)

-- stdout --
	* The control-plane node nospam-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-796000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)
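
Note: both pause and unpause exit with status 83 here because the profile's control-plane host is stopped, so the commands can only print the hint shown in stdout. A minimal recovery sketch, using the command the output itself suggests:

$ out/minikube-darwin-arm64 start -p nospam-796000
$ out/minikube-darwin-arm64 -p nospam-796000 pause

The subtests still pass because error_spam_test scans the output for unexpected warning or error spam; it evidently does not require the commands themselves to succeed.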
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop: (3.565906625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop: (3.616800583s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-796000 stop: (2.138477167s)
--- PASS: TestErrorSpam/stop (9.32s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17877-15366/.minikube/files/etc/test/nested/copy/15784/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.1: (2.102593708s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.3: (2.15477925s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:latest: (1.826043792s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local700594033/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add minikube-local-cache-test:functional-908000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache delete minikube-local-cache-test:functional-908000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
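
Note: taken together, the CacheCmd subtests above exercise the whole image-cache lifecycle. Condensed into a sketch (commands taken from the log):

$ out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.1
$ out/minikube-darwin-arm64 cache list
$ out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1

cache add pulls the image into minikube's local cache on the host (and loads it into running clusters), which is why each add above takes around two seconds even for the tiny pause image.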
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (32.2655ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (36.841916ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
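
Note: config get on a key that is not set exits with status 14 and "specified key could not be found in config", so the two non-zero exits above are the expected outcome of config unset rather than failures. The round trip being exercised:

$ out/minikube-darwin-arm64 -p functional-908000 config set cpus 2
$ out/minikube-darwin-arm64 -p functional-908000 config get cpus
$ out/minikube-darwin-arm64 -p functional-908000 config unset cpus
$ out/minikube-darwin-arm64 -p functional-908000 config get cpus    # exits 14 again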
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (164.341958ms)

-- stdout --
	* [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0328 11:51:16.171527   16551 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:51:16.171680   16551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.171684   16551 out.go:304] Setting ErrFile to fd 2...
	I0328 11:51:16.171688   16551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.171845   16551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:51:16.173183   16551 out.go:298] Setting JSON to false
	I0328 11:51:16.192375   16551 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10248,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:51:16.192438   16551 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:51:16.198197   16551 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0328 11:51:16.204086   16551 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:51:16.208084   16551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:51:16.204101   16551 notify.go:220] Checking for updates...
	I0328 11:51:16.216099   16551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:51:16.219102   16551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:51:16.222067   16551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:51:16.225095   16551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:51:16.228451   16551 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:51:16.228756   16551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:51:16.232047   16551 out.go:177] * Using the qemu2 driver based on existing profile
	I0328 11:51:16.239119   16551 start.go:297] selected driver: qemu2
	I0328 11:51:16.239126   16551 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:51:16.239188   16551 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:51:16.246023   16551 out.go:177] 
	W0328 11:51:16.250043   16551 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0328 11:51:16.253936   16551 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
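
Note: --dry-run walks the full start-up validation (profile load, driver selection, resource checks) without creating a VM, so the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY while the second invocation, which omits --memory, passes. A sketch of a dry run that should clear the 1800MB floor named in the error (the 4000MB value is illustrative, taken from this profile's existing config):

$ out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 4000 --alsologtostderr --driver=qemu2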
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.110125ms)

-- stdout --
	* [functional-908000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0328 11:51:16.407781   16562 out.go:291] Setting OutFile to fd 1 ...
	I0328 11:51:16.407909   16562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.407915   16562 out.go:304] Setting ErrFile to fd 2...
	I0328 11:51:16.407918   16562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 11:51:16.408039   16562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17877-15366/.minikube/bin
	I0328 11:51:16.409559   16562 out.go:298] Setting JSON to false
	I0328 11:51:16.426173   16562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10248,"bootTime":1711641628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0328 11:51:16.426245   16562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 11:51:16.430065   16562 out.go:177] * [functional-908000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0328 11:51:16.437009   16562 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 11:51:16.437043   16562 notify.go:220] Checking for updates...
	I0328 11:51:16.445117   16562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	I0328 11:51:16.449026   16562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0328 11:51:16.452078   16562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 11:51:16.455076   16562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	I0328 11:51:16.458002   16562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 11:51:16.461348   16562 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 11:51:16.461609   16562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 11:51:16.466034   16562 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0328 11:51:16.473040   16562 start.go:297] selected driver: qemu2
	I0328 11:51:16.473045   16562 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 11:51:16.473094   16562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 11:51:16.478013   16562 out.go:177] 
	W0328 11:51:16.482008   16562 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0328 11:51:16.486073   16562 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
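
Note: the French output is the point of this test: minikube ships translated message catalogs and picks one based on the host locale, so the harness presumably re-runs the same under-memory dry run with a French locale in the environment. A hypothetical manual reproduction (the locale variable is an assumption, not taken from this log):

$ LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --driver=qemu2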
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.35525875s)
--- PASS: TestFunctional/parallel/License (1.36s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.20689425s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image rm gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-908000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save --daemon gcr.io/google-containers/addon-resizer:functional-908000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
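
Note: image save --daemon copies an image from the cluster's container runtime back into the host's Docker daemon. The preceding docker rmi removes the host copy first, so the final docker image inspect can only succeed if the transfer actually happened:

$ docker rmi gcr.io/google-containers/addon-resizer:functional-908000
$ out/minikube-darwin-arm64 -p functional-908000 image save --daemon gcr.io/google-containers/addon-resizer:functional-908000
$ docker image inspect gcr.io/google-containers/addon-resizer:functional-908000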
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.942834ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.00375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "70.427375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.572042ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.010692042s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
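
Note: dscacheutil -q host resolves through macOS Directory Services rather than querying DNS directly, so this subtest checks that the tunnel's DNS integration is visible to ordinary macOS name resolution. The trailing dot makes the name fully qualified, preventing search-domain expansion:

$ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.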
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-908000
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-908000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-663000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-663000 --output=json --user=testUser: (3.287730792s)
--- PASS: TestJSONOutput/stop/Command (3.29s)
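
Note: --output=json makes stop emit machine-readable events instead of human-readable text, and --user=testUser attributes the invocation in minikube's audit log, which is presumably what the neighbouring Audit subtests then verify:

$ out/minikube-darwin-arm64 stop -p json-output-663000 --output=json --user=testUser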
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-611000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-611000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.734416ms)

-- stdout --
	{"specversion":"1.0","id":"323dc372-f9d2-4456-b789-51caf0b0575e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-611000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e147c9c-3213-4c62-9c92-c13ef4f1e019","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17877"}}
	{"specversion":"1.0","id":"dc16f646-42bc-41f0-a858-f481b9701d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig"}}
	{"specversion":"1.0","id":"220e573c-7577-40e1-a6dc-31d49c9b8dc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ea852042-c663-4a48-bfb0-1b9758cc1bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"690e4c04-1b87-4a70-8f2c-ac6e70c17517","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube"}}
	{"specversion":"1.0","id":"176a9b49-9a6a-42e7-92de-cd899a36751b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3206a6a7-a468-4373-92cd-26285bc5e2af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-611000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-611000
--- PASS: TestErrorJSONOutput (0.33s)
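
Note: each stdout line above is a CloudEvents-style JSON envelope (specversion, id, source, type, data), one event per line, so the stream can be filtered with standard tools. A sketch, assuming jq is available:

$ out/minikube-darwin-arm64 start -p json-output-error-611000 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

For this run that would print: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64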
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.04s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.744708ms)

-- stdout --
	* [NoKubernetes-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17877-15366/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17877-15366/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
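
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE usage error (exit status 14) is exactly what this subtest expects. The hint about unsetting the global config matters because a kubernetes-version stored via minikube config would trigger the same conflict. A valid no-Kubernetes start simply drops the version flag:

$ out/minikube-darwin-arm64 start -p NoKubernetes-860000 --no-kubernetes --driver=qemu2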
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-860000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-860000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.87325ms)

-- stdout --
	* The control-plane node NoKubernetes-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-860000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
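
Note: systemctl is-active --quiet prints nothing and answers purely through its exit code (0 only when the unit is active), so the assertion here is on the exit status of the whole ssh command. In this run it never reaches systemd: the host itself is stopped, so minikube ssh refuses with exit 83, which the test evidently also accepts as proof that kubelet is not running. A sketch of the check with the exit code made visible:

$ out/minikube-darwin-arm64 ssh -p NoKubernetes-860000 "sudo systemctl is-active --quiet service kubelet"; echo $?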
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.677977916s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.708647458s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-860000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-860000: (2.910528125s)
--- PASS: TestNoKubernetes/serial/Stop (2.91s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-860000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-860000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.789041ms)

-- stdout --
	* The control-plane node NoKubernetes-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-860000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-732000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-648000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-648000 --alsologtostderr -v=3: (1.931917875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000 -n old-k8s-version-648000: exit status 7 (45.183208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-648000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
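
Note: --format={{.Host}} renders a single field of the status report through a Go template, and status also encodes cluster state in its exit code, hence the harness's "exit status 7 (may be ok)": a stopped host is the expected precondition for EnableAddonAfterStop, so the test proceeds to enable the dashboard addon against the stopped profile. Checking just the host state:

$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-648000

The same Stop / status / addons enable dashboard sequence repeats below for the no-preload, embed-certs, default-k8s-diff-port, and newest-cni groups.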
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-293000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-293000 --alsologtostderr -v=3: (2.989567541s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.99s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-293000 -n no-preload-293000: exit status 7 (59.117458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-293000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-778000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-778000 --alsologtostderr -v=3: (1.865927042s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.87s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-778000 -n embed-certs-778000: exit status 7 (56.41125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-778000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-925000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-925000 --alsologtostderr -v=3: (3.29382125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.29s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-644000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-644000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-644000 --alsologtostderr -v=3: (1.839949416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-925000 -n default-k8s-diff-port-925000: exit status 7 (59.834208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-925000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (57.482958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-644000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3308791504/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711651835450152000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3308791504/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711651835450152000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3308791504/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711651835450152000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3308791504/001/test-1711651835450152000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.840875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.517625ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.938042ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.364667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.499ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.2725ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.351292ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.610958ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (48.7345ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3308791504/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.20s)

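Note on the skipped mount tests: the probe above is a retry loop around findmnt -T /mount-9p inside the guest, and every attempt here exits 83 because the functional-908000 host is already stopped; the skip message then points at the usual macOS cause, the unsigned binary never getting approval to listen on a non-localhost port. A rough Go sketch of such a loop; the retry count and delay are assumptions, not the harness's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 8; i++ {
			// Same probe as functional_test_mount_test.go: look for the 9p mount in the guest.
			err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-908000",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				fmt.Println("mount appeared")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("mount did not appear")
	}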
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port49741585/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.533042ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.314583ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.933208ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.400083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.125792ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.146958ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.001041ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.347292ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (51.733667ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port49741585/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (76.995334ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.08725ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (87.04225ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (89.1175ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (86.010584ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (88.967875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (91.9245ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2002831064/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.51s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-772000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-772000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-772000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/hosts:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/resolv.conf:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-772000

>>> host: crictl pods:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: crictl containers:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> k8s: describe netcat deployment:
error: context "cilium-772000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-772000" does not exist

>>> k8s: netcat logs:
error: context "cilium-772000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-772000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-772000" does not exist

>>> k8s: coredns logs:
error: context "cilium-772000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-772000" does not exist

>>> k8s: api server logs:
error: context "cilium-772000" does not exist

>>> host: /etc/cni:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: ip a s:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: ip r s:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: iptables-save:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: iptables table nat:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-772000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-772000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-772000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-772000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-772000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-772000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-772000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-772000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-772000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-772000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-772000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: kubelet daemon config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> k8s: kubelet logs:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-772000

>>> host: docker daemon status:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: docker daemon config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: docker system info:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: cri-docker daemon status:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: cri-docker daemon config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: cri-dockerd version:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: containerd daemon status:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: containerd daemon config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: containerd config dump:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: crio daemon status:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: crio daemon config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: /etc/crio:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

>>> host: crio config:
* Profile "cilium-772000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-772000"

----------------------- debugLogs end: cilium-772000 [took: 2.426660458s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-772000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-772000
--- SKIP: TestNetworkPlugins/group/cilium (2.66s)
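Note on the debugLogs failures: the captured kubectl config is empty (clusters: null, contexts: null) because the cilium profile appears never to have been started before cleanup, so every context-scoped probe fails locally before any network call. Reproducing one probe by hand, sketched in Go and assuming kubectl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// With no contexts defined, kubectl rejects the --context flag immediately.
		out, err := exec.Command("kubectl", "--context", "cilium-772000", "get", "nodes").CombinedOutput()
		fmt.Printf("%s(exit: %v)\n", out, err)
		// Expected output: error: context "cilium-772000" does not exist
	}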

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-776000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-776000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
