Test Report: QEMU_macOS 18158

89f58c8e24abd36bf3098da28321dad15a54de9c:2024-03-27:33771

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.12
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.01
36 TestAddons/Setup 10.18
37 TestCertOptions 10.23
38 TestCertExpiration 195.26
39 TestDockerFlags 10.03
40 TestForceSystemdFlag 10.15
41 TestForceSystemdEnv 10.06
47 TestErrorSpam/setup 9.78
56 TestFunctional/serial/StartWithProxy 9.94
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 105.93
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.45
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 21.74
150 TestMultiControlPlane/serial/StartCluster 10.04
151 TestMultiControlPlane/serial/DeployApp 114.92
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.07
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 41.26
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.9
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 3.4
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.94
174 TestJSONOutput/start/Command 9.82
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.29
206 TestMountStart/serial/StartWithMountFirst 10.62
209 TestMultiNode/serial/FreshStart2Nodes 9.95
210 TestMultiNode/serial/DeployApp2Nodes 106.18
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.1
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 45.96
218 TestMultiNode/serial/RestartKeepsNodes 7.45
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.42
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.41
226 TestPreload 10.19
228 TestScheduledStopUnix 10.57
229 TestSkaffold 16.96
232 TestRunningBinaryUpgrade 635.43
234 TestKubernetesUpgrade 18.71
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.24
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.2
250 TestStoppedBinaryUpgrade/Upgrade 585.99
252 TestPause/serial/Start 10.31
262 TestNoKubernetes/serial/StartWithK8s 9.88
263 TestNoKubernetes/serial/StartWithStopK8s 6.37
264 TestNoKubernetes/serial/Start 5.89
268 TestNoKubernetes/serial/StartNoArgs 6.37
270 TestNetworkPlugins/group/auto/Start 9.87
271 TestNetworkPlugins/group/calico/Start 9.75
272 TestNetworkPlugins/group/custom-flannel/Start 9.79
273 TestNetworkPlugins/group/false/Start 9.75
274 TestNetworkPlugins/group/kindnet/Start 9.85
275 TestNetworkPlugins/group/flannel/Start 9.79
276 TestNetworkPlugins/group/enable-default-cni/Start 10.19
278 TestNetworkPlugins/group/bridge/Start 9.91
279 TestNetworkPlugins/group/kubenet/Start 9.76
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
283 TestStartStop/group/no-preload/serial/FirstStart 10.09
284 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/old-k8s-version/serial/SecondStart 5.29
289 TestStartStop/group/no-preload/serial/DeployApp 0.09
290 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
293 TestStartStop/group/no-preload/serial/SecondStart 5.27
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
297 TestStartStop/group/old-k8s-version/serial/Pause 0.11
299 TestStartStop/group/embed-certs/serial/FirstStart 9.79
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.82
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/embed-certs/serial/SecondStart 5.78
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 9.99
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11

TestDownloadOnly/v1.20.0/json-events (39.12s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-978000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-978000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.1182845s)

-- stdout --
	{"specversion":"1.0","id":"dad431ae-8fb9-4313-9c1d-2c34cb2c8d9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-978000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09a853d6-c470-4b56-913f-c4c6cfb310e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18158"}}
	{"specversion":"1.0","id":"866b017c-fa98-4380-8d97-0e6251a4d948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig"}}
	{"specversion":"1.0","id":"eee7486a-7a37-40cd-aba8-6b51a69fc8f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3cd3c449-5235-4328-a936-8b3466f2d214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd1150b0-c79a-4e84-9379-b0eeb0b5e769","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube"}}
	{"specversion":"1.0","id":"13c20a7b-0108-4897-9fd5-8d60af6fd959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0406b0e1-f12c-4460-b829-b2d88278a7a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"022fddab-2ac9-48a7-a5d7-4e589f9764fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"32f01cf6-0a4b-4cc3-9269-f91039e6678f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"53541c15-3cc5-4c10-9bef-df9dd6ba6bb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-978000\" primary control-plane node in \"download-only-978000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2701f272-b53a-40b3-a6aa-b775004674be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3c0f2e0-8ee6-4ce8-9f0e-42e199771a93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220] Decompressors:map[bz2:0x1400051d730 gz:0x1400051d738 tar:0x1400051d6c0 tar.bz2:0x1400051d6d0 tar.gz:0x1400051d6f0 tar.xz:0x1400051d700 tar.zst:0x1400051d710 tbz2:0x1400051d6d0 tgz:0x1
400051d6f0 txz:0x1400051d700 tzst:0x1400051d710 xz:0x1400051d740 zip:0x1400051d750 zst:0x1400051d748] Getters:map[file:0x14002506640 http:0x1400090c2d0 https:0x1400090c3c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"28929621-8426-485a-b9fc-703ae038dff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0327 13:45:24.593176   11754 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:45:24.593346   11754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:45:24.593349   11754 out.go:304] Setting ErrFile to fd 2...
	I0327 13:45:24.593351   11754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:45:24.593463   11754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	W0327 13:45:24.593549   11754 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18158-11341/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18158-11341/.minikube/config/config.json: no such file or directory
	I0327 13:45:24.594890   11754 out.go:298] Setting JSON to true
	I0327 13:45:24.612653   11754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6294,"bootTime":1711566030,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:45:24.612721   11754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:45:24.617897   11754 out.go:97] [download-only-978000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:45:24.621745   11754 out.go:169] MINIKUBE_LOCATION=18158
	I0327 13:45:24.618056   11754 notify.go:220] Checking for updates...
	W0327 13:45:24.618098   11754 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 13:45:24.627091   11754 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:45:24.629787   11754 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:45:24.632824   11754 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:45:24.635800   11754 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	W0327 13:45:24.641778   11754 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 13:45:24.641965   11754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:45:24.644756   11754 out.go:97] Using the qemu2 driver based on user configuration
	I0327 13:45:24.644773   11754 start.go:297] selected driver: qemu2
	I0327 13:45:24.644787   11754 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:45:24.644841   11754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:45:24.647720   11754 out.go:169] Automatically selected the socket_vmnet network
	I0327 13:45:24.653062   11754 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 13:45:24.653173   11754 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:45:24.653264   11754 cni.go:84] Creating CNI manager for ""
	I0327 13:45:24.653282   11754 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 13:45:24.653330   11754 start.go:340] cluster config:
	{Name:download-only-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0327 13:45:24.658218   11754 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:45:24.662639   11754 out.go:97] Downloading VM boot image ...
	I0327 13:45:24.662668   11754 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso
	I0327 13:45:41.876625   11754 out.go:97] Starting "download-only-978000" primary control-plane node in "download-only-978000" cluster
	I0327 13:45:41.876666   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:45:42.146232   11754 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 13:45:42.146307   11754 cache.go:56] Caching tarball of preloaded images
	I0327 13:45:42.147684   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:45:42.151068   11754 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 13:45:42.151094   11754 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:45:42.720876   11754 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 13:46:02.565910   11754 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:02.566090   11754 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:03.263993   11754 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 13:46:03.264200   11754 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-978000/config.json ...
	I0327 13:46:03.264220   11754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-978000/config.json: {Name:mk7caafd3c9c2f6d5198e090232e1e442ddbf929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:46:03.264459   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:46:03.265336   11754 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 13:46:03.630556   11754 out.go:169] 
	W0327 13:46:03.637681   11754 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220] Decompressors:map[bz2:0x1400051d730 gz:0x1400051d738 tar:0x1400051d6c0 tar.bz2:0x1400051d6d0 tar.gz:0x1400051d6f0 tar.xz:0x1400051d700 tar.zst:0x1400051d710 tbz2:0x1400051d6d0 tgz:0x1400051d6f0 txz:0x1400051d700 tzst:0x1400051d710 xz:0x1400051d740 zip:0x1400051d750 zst:0x1400051d748] Getters:map[file:0x14002506640 http:0x1400090c2d0 https:0x1400090c3c0] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 13:46:03.637717   11754 out_reason.go:110] 
	W0327 13:46:03.644504   11754 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:46:03.648543   11754 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-978000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.12s)
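The exit status 40 above is a download failure, not a driver problem: minikube fetches the kubectl binary together with its .sha256 checksum file, and the checksum URL returns 404; v1.20.0 predates published darwin/arm64 kubectl release artifacts. This is reproducible from any machine with plain curl; a minimal sketch (dl.k8s.io is a redirector, so -L is needed to follow through to the final status code; v1.21.0 is assumed here as a release that does ship darwin/arm64 binaries):

    # Expect 404: no darwin/arm64 kubectl was published for v1.20.0.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
    # Expect 200 for a release that shipped the artifact (assumption: v1.21.0).
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.21.0/bin/darwin/arm64/kubectl.sha256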

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-484000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-484000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.825070166s)

-- stdout --
	* [offline-docker-484000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-484000" primary control-plane node in "offline-docker-484000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-484000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:58:16.539320   13528 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:58:16.539444   13528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:16.539447   13528 out.go:304] Setting ErrFile to fd 2...
	I0327 13:58:16.539450   13528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:16.539578   13528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:58:16.540643   13528 out.go:298] Setting JSON to false
	I0327 13:58:16.558110   13528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7066,"bootTime":1711566030,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:58:16.558184   13528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:58:16.563380   13528 out.go:177] * [offline-docker-484000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:58:16.568341   13528 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:58:16.568354   13528 notify.go:220] Checking for updates...
	I0327 13:58:16.575212   13528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:58:16.578372   13528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:58:16.581374   13528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:58:16.582478   13528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:58:16.585400   13528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:58:16.588781   13528 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:16.588835   13528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:58:16.593182   13528 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:58:16.600299   13528 start.go:297] selected driver: qemu2
	I0327 13:58:16.600310   13528 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:58:16.600318   13528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:58:16.602349   13528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:58:16.605369   13528 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:58:16.608423   13528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:58:16.608457   13528 cni.go:84] Creating CNI manager for ""
	I0327 13:58:16.608463   13528 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:58:16.608467   13528 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:58:16.608498   13528 start.go:340] cluster config:
	{Name:offline-docker-484000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_cl
ient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:58:16.613134   13528 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:58:16.619210   13528 out.go:177] * Starting "offline-docker-484000" primary control-plane node in "offline-docker-484000" cluster
	I0327 13:58:16.623294   13528 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:58:16.623327   13528 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:58:16.623337   13528 cache.go:56] Caching tarball of preloaded images
	I0327 13:58:16.623407   13528 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:58:16.623412   13528 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:58:16.623470   13528 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/offline-docker-484000/config.json ...
	I0327 13:58:16.623480   13528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/offline-docker-484000/config.json: {Name:mk2259a3ed63d01dbcc2eaaf43cd8810bc5d4eb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:58:16.623720   13528 start.go:360] acquireMachinesLock for offline-docker-484000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:16.623749   13528 start.go:364] duration metric: took 23.291µs to acquireMachinesLock for "offline-docker-484000"
	I0327 13:58:16.623760   13528 start.go:93] Provisioning new machine with config: &{Name:offline-docker-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterNam
e:offline-docker-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:16.623791   13528 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:16.627345   13528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:16.642395   13528 start.go:159] libmachine.API.Create for "offline-docker-484000" (driver="qemu2")
	I0327 13:58:16.642422   13528 client.go:168] LocalClient.Create starting
	I0327 13:58:16.642487   13528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:16.642517   13528 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:16.642526   13528 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:16.642572   13528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:16.642594   13528 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:16.642599   13528 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:16.642967   13528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:16.784228   13528 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:16.921864   13528 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:16.921871   13528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:16.926083   13528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:16.938283   13528 main.go:141] libmachine: STDOUT: 
	I0327 13:58:16.938306   13528 main.go:141] libmachine: STDERR: 
	I0327 13:58:16.938360   13528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2 +20000M
	I0327 13:58:16.949831   13528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:16.949852   13528 main.go:141] libmachine: STDERR: 
	I0327 13:58:16.949874   13528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:16.949879   13528 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:16.949912   13528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ee:5c:74:9c:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:16.951751   13528 main.go:141] libmachine: STDOUT: 
	I0327 13:58:16.951767   13528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:16.951792   13528 client.go:171] duration metric: took 309.369583ms to LocalClient.Create
	I0327 13:58:18.953841   13528 start.go:128] duration metric: took 2.330071s to createHost
	I0327 13:58:18.953863   13528 start.go:83] releasing machines lock for "offline-docker-484000", held for 2.330139583s
	W0327 13:58:18.953885   13528 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:18.961756   13528 out.go:177] * Deleting "offline-docker-484000" in qemu2 ...
	W0327 13:58:18.970834   13528 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:18.970846   13528 start.go:728] Will try again in 5 seconds ...
	I0327 13:58:23.973019   13528 start.go:360] acquireMachinesLock for offline-docker-484000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:23.973440   13528 start.go:364] duration metric: took 299.5µs to acquireMachinesLock for "offline-docker-484000"
	I0327 13:58:23.973578   13528 start.go:93] Provisioning new machine with config: &{Name:offline-docker-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterNam
e:offline-docker-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:23.973869   13528 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:23.981125   13528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:24.031868   13528 start.go:159] libmachine.API.Create for "offline-docker-484000" (driver="qemu2")
	I0327 13:58:24.031926   13528 client.go:168] LocalClient.Create starting
	I0327 13:58:24.032033   13528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:24.032120   13528 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:24.032137   13528 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:24.032209   13528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:24.032265   13528 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:24.032278   13528 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:24.032818   13528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:24.184096   13528 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:24.255734   13528 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:24.255739   13528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:24.255907   13528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:24.268059   13528 main.go:141] libmachine: STDOUT: 
	I0327 13:58:24.268092   13528 main.go:141] libmachine: STDERR: 
	I0327 13:58:24.268154   13528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2 +20000M
	I0327 13:58:24.278938   13528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:24.278960   13528 main.go:141] libmachine: STDERR: 
	I0327 13:58:24.278973   13528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:24.278977   13528 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:24.279009   13528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:7a:bb:8d:01:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/offline-docker-484000/disk.qcow2
	I0327 13:58:24.280760   13528 main.go:141] libmachine: STDOUT: 
	I0327 13:58:24.280781   13528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:24.280796   13528 client.go:171] duration metric: took 248.868458ms to LocalClient.Create
	I0327 13:58:26.282947   13528 start.go:128] duration metric: took 2.309072541s to createHost
	I0327 13:58:26.283070   13528 start.go:83] releasing machines lock for "offline-docker-484000", held for 2.309634625s
	W0327 13:58:26.283387   13528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:26.296008   13528 out.go:177] 
	W0327 13:58:26.299958   13528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:58:26.300000   13528 out.go:239] * 
	* 
	W0327 13:58:26.302828   13528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:58:26.316995   13528 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-484000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-27 13:58:26.333381 -0700 PDT m=+781.838157793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-484000 -n offline-docker-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-484000 -n offline-docker-484000: exit status 7 (70.21725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-484000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-484000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-484000
--- FAIL: TestOffline (10.01s)
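Every VM-backed failure in this run repeats the pattern above: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so the qemu2 VM never gets its network and minikube gives up after two create attempts. That points at the socket_vmnet daemon on the CI host rather than at minikube itself (the cluster config in the log shows the expected paths: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client, SocketVMnetPath:/var/run/socket_vmnet). A minimal triage sketch for the host, assuming the /opt/socket_vmnet install prefix from the logs; the gateway address below is the upstream project's example and the right value depends on the host's network setup:

    # Is the socket there, and is a daemon holding it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If nothing is running, start the daemon by hand (vmnet needs root);
    # 192.168.105.1 is the example gateway from the socket_vmnet README.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 \
      /var/run/socket_vmnet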

TestAddons/Setup (10.18s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-714000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-714000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.178971375s)

-- stdout --
	* [addons-714000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-714000" primary control-plane node in "addons-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:46:56.151882   11941 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:46:56.152035   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:56.152038   11941 out.go:304] Setting ErrFile to fd 2...
	I0327 13:46:56.152041   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:56.152171   11941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:46:56.153263   11941 out.go:298] Setting JSON to false
	I0327 13:46:56.169282   11941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6386,"bootTime":1711566030,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:46:56.169345   11941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:46:56.174498   11941 out.go:177] * [addons-714000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:46:56.181526   11941 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:46:56.185433   11941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:46:56.181575   11941 notify.go:220] Checking for updates...
	I0327 13:46:56.191426   11941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:46:56.194494   11941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:46:56.197452   11941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:46:56.200459   11941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:46:56.203596   11941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:46:56.207497   11941 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:46:56.214416   11941 start.go:297] selected driver: qemu2
	I0327 13:46:56.214423   11941 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:46:56.214428   11941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:46:56.216725   11941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:46:56.220431   11941 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:46:56.223608   11941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:46:56.223653   11941 cni.go:84] Creating CNI manager for ""
	I0327 13:46:56.223667   11941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:46:56.223672   11941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:46:56.223712   11941 start.go:340] cluster config:
	{Name:addons-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnet
Path:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:46:56.228144   11941 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:46:56.236457   11941 out.go:177] * Starting "addons-714000" primary control-plane node in "addons-714000" cluster
	I0327 13:46:56.240469   11941 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:46:56.240484   11941 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:46:56.240493   11941 cache.go:56] Caching tarball of preloaded images
	I0327 13:46:56.240544   11941 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:46:56.240549   11941 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:46:56.240779   11941 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/addons-714000/config.json ...
	I0327 13:46:56.240790   11941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/addons-714000/config.json: {Name:mk81d656e534ac8d8e1d46af910e6a5b9dad19f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:46:56.241004   11941 start.go:360] acquireMachinesLock for addons-714000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:46:56.241171   11941 start.go:364] duration metric: took 161.875µs to acquireMachinesLock for "addons-714000"
	I0327 13:46:56.241183   11941 start.go:93] Provisioning new machine with config: &{Name:addons-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons
-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:46:56.241213   11941 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:46:56.244531   11941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 13:46:56.261959   11941 start.go:159] libmachine.API.Create for "addons-714000" (driver="qemu2")
	I0327 13:46:56.261982   11941 client.go:168] LocalClient.Create starting
	I0327 13:46:56.262097   11941 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:46:56.319890   11941 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:46:56.400234   11941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:46:56.657566   11941 main.go:141] libmachine: Creating SSH key...
	I0327 13:46:56.810972   11941 main.go:141] libmachine: Creating Disk image...
	I0327 13:46:56.810979   11941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:46:56.811163   11941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:46:56.823888   11941 main.go:141] libmachine: STDOUT: 
	I0327 13:46:56.823910   11941 main.go:141] libmachine: STDERR: 
	I0327 13:46:56.823957   11941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2 +20000M
	I0327 13:46:56.834594   11941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:46:56.834617   11941 main.go:141] libmachine: STDERR: 
	I0327 13:46:56.834627   11941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:46:56.834635   11941 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:46:56.834667   11941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:5a:b2:b9:ee:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:46:56.836475   11941 main.go:141] libmachine: STDOUT: 
	I0327 13:46:56.836501   11941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:46:56.836519   11941 client.go:171] duration metric: took 574.538292ms to LocalClient.Create
	I0327 13:46:58.838716   11941 start.go:128] duration metric: took 2.597508625s to createHost
	I0327 13:46:58.838800   11941 start.go:83] releasing machines lock for "addons-714000", held for 2.5976515s
	W0327 13:46:58.838892   11941 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:46:58.849815   11941 out.go:177] * Deleting "addons-714000" in qemu2 ...
	W0327 13:46:58.878006   11941 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:46:58.878048   11941 start.go:728] Will try again in 5 seconds ...
	I0327 13:47:03.878741   11941 start.go:360] acquireMachinesLock for addons-714000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:47:03.879192   11941 start.go:364] duration metric: took 310.75µs to acquireMachinesLock for "addons-714000"
	I0327 13:47:03.879339   11941 start.go:93] Provisioning new machine with config: &{Name:addons-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons
-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:47:03.879620   11941 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:47:03.889168   11941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 13:47:03.940610   11941 start.go:159] libmachine.API.Create for "addons-714000" (driver="qemu2")
	I0327 13:47:03.940672   11941 client.go:168] LocalClient.Create starting
	I0327 13:47:03.940806   11941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:47:03.940882   11941 main.go:141] libmachine: Decoding PEM data...
	I0327 13:47:03.940902   11941 main.go:141] libmachine: Parsing certificate...
	I0327 13:47:03.941000   11941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:47:03.941051   11941 main.go:141] libmachine: Decoding PEM data...
	I0327 13:47:03.941065   11941 main.go:141] libmachine: Parsing certificate...
	I0327 13:47:03.941624   11941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:47:04.093103   11941 main.go:141] libmachine: Creating SSH key...
	I0327 13:47:04.229787   11941 main.go:141] libmachine: Creating Disk image...
	I0327 13:47:04.229793   11941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:47:04.230010   11941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:47:04.242744   11941 main.go:141] libmachine: STDOUT: 
	I0327 13:47:04.242765   11941 main.go:141] libmachine: STDERR: 
	I0327 13:47:04.242822   11941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2 +20000M
	I0327 13:47:04.253548   11941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:47:04.253568   11941 main.go:141] libmachine: STDERR: 
	I0327 13:47:04.253579   11941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:47:04.253583   11941 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:47:04.253611   11941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:9f:aa:d1:13:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/addons-714000/disk.qcow2
	I0327 13:47:04.255335   11941 main.go:141] libmachine: STDOUT: 
	I0327 13:47:04.255352   11941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:47:04.255365   11941 client.go:171] duration metric: took 314.689791ms to LocalClient.Create
	I0327 13:47:06.257654   11941 start.go:128] duration metric: took 2.377999958s to createHost
	I0327 13:47:06.257719   11941 start.go:83] releasing machines lock for "addons-714000", held for 2.378533875s
	W0327 13:47:06.258048   11941 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:47:06.267530   11941 out.go:177] 
	W0327 13:47:06.274606   11941 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:47:06.274642   11941 out.go:239] * 
	* 
	W0327 13:47:06.277442   11941 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:47:06.285471   11941 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-714000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.18s)
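The stderr trace above also shows minikube's recovery path: the first StartHost failure deletes the half-created profile, a single retry runs after a 5-second pause, and only then does start exit with GUEST_PROVISION (exit status 80). An illustrative sketch of that retry shape, using a stand-in createHost that fails the way this run did (this is not the actual start.go implementation):

	// Retry shape matching the log above ("StartHost failed, but will try
	// again" ... "Will try again in 5 seconds"); illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for VM creation; in this run it always failed.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the tests observe
			}
		}
		fmt.Println("host started")
	}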

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-468000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-468000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.943736791s)

-- stdout --
	* [cert-options-468000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-468000" primary control-plane node in "cert-options-468000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-468000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-468000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-468000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-468000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-468000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.063792ms)

-- stdout --
	* The control-plane node cert-options-468000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-468000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-468000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-468000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-468000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-468000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.7165ms)

-- stdout --
	* The control-plane node cert-options-468000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-468000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-468000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-468000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-468000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-27 13:58:56.711977 -0700 PDT m=+812.217135459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-468000 -n cert-options-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-468000 -n cert-options-468000: exit status 7 (32.122208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-468000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-468000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-468000
--- FAIL: TestCertOptions (10.23s)
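For context, the SAN assertions at cert_options_test.go:69 are the Go equivalent of the openssl command the test could not run: parse apiserver.crt and confirm the requested IPs and DNS names appear as subject alternative names. A self-contained sketch, assuming a local PEM copy of the certificate (hypothetical filename, since the VM never started):

	// Check that a certificate's SANs include the IPs and DNS names the
	// test passed via --apiserver-ips and --apiserver-names.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
			found := false
			for _, ip := range cert.IPAddresses {
				if ip.Equal(net.ParseIP(want)) {
					found = true
				}
			}
			fmt.Printf("SAN includes %s: %v\n", want, found)
		}
		for _, want := range []string{"localhost", "www.google.com"} {
			found := false
			for _, dns := range cert.DNSNames {
				if dns == want {
					found = true
				}
			}
			fmt.Printf("SAN includes %s: %v\n", want, found)
		}
	}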

TestCertExpiration (195.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.860385916s)

-- stdout --
	* [cert-expiration-514000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-514000" primary control-plane node in "cert-expiration-514000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-514000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-514000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.253066s)

-- stdout --
	* [cert-expiration-514000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-514000" primary control-plane node in "cert-expiration-514000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-514000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-514000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-514000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-514000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-514000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-514000" primary control-plane node in "cert-expiration-514000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-514000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-514000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-514000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-27 14:01:56.637174 -0700 PDT m=+992.144598084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-514000 -n cert-expiration-514000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-514000 -n cert-expiration-514000: exit status 7 (46.256917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-514000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-514000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-514000
--- FAIL: TestCertExpiration (195.26s)
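The condition this test exercises is that after the 3-minute --cert-expiration window, the cluster certificate's NotAfter lies in the past, so the second start (with --cert-expiration=8760h) should warn about expired certs before regenerating them. A short sketch of that expiry check, assuming a hypothetical PEM certificate path to inspect:

	// Check whether a PEM certificate has expired; mirrors the condition
	// behind the "expired certs" warning this test expected to see.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("ca.crt") // hypothetical cert file to inspect
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Printf("certificate expired at %s\n", cert.NotAfter)
		} else {
			fmt.Printf("certificate valid until %s\n", cert.NotAfter)
		}
	}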

TestDockerFlags (10.03s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.777336334s)

-- stdout --
	* [docker-flags-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-637000" primary control-plane node in "docker-flags-637000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-637000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:58:36.609015   13724 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:58:36.609150   13724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:36.609154   13724 out.go:304] Setting ErrFile to fd 2...
	I0327 13:58:36.609156   13724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:36.609275   13724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:58:36.610323   13724 out.go:298] Setting JSON to false
	I0327 13:58:36.626523   13724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7086,"bootTime":1711566030,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:58:36.626593   13724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:58:36.631477   13724 out.go:177] * [docker-flags-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:58:36.637486   13724 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:58:36.637535   13724 notify.go:220] Checking for updates...
	I0327 13:58:36.643451   13724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:58:36.646485   13724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:58:36.647878   13724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:58:36.651469   13724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:58:36.654491   13724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:58:36.657882   13724 config.go:182] Loaded profile config "force-systemd-flag-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:36.657949   13724 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:36.658001   13724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:58:36.662468   13724 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:58:36.669400   13724 start.go:297] selected driver: qemu2
	I0327 13:58:36.669405   13724 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:58:36.669410   13724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:58:36.671638   13724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:58:36.674438   13724 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:58:36.677565   13724 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0327 13:58:36.677617   13724 cni.go:84] Creating CNI manager for ""
	I0327 13:58:36.677626   13724 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:58:36.677630   13724 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:58:36.677669   13724 start.go:340] cluster config:
	{Name:docker-flags-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt
/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:58:36.682098   13724 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:58:36.689493   13724 out.go:177] * Starting "docker-flags-637000" primary control-plane node in "docker-flags-637000" cluster
	I0327 13:58:36.693416   13724 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:58:36.693433   13724 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:58:36.693444   13724 cache.go:56] Caching tarball of preloaded images
	I0327 13:58:36.693499   13724 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:58:36.693505   13724 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:58:36.693576   13724 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/docker-flags-637000/config.json ...
	I0327 13:58:36.693588   13724 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/docker-flags-637000/config.json: {Name:mkdde96696e36fae4e944eee322f39c69cc01e8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:58:36.693813   13724 start.go:360] acquireMachinesLock for docker-flags-637000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:36.693864   13724 start.go:364] duration metric: took 43.958µs to acquireMachinesLock for "docker-flags-637000"
	I0327 13:58:36.693875   13724 start.go:93] Provisioning new machine with config: &{Name:docker-flags-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:docker-flags-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:36.693911   13724 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:36.697438   13724 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:36.714892   13724 start.go:159] libmachine.API.Create for "docker-flags-637000" (driver="qemu2")
	I0327 13:58:36.714924   13724 client.go:168] LocalClient.Create starting
	I0327 13:58:36.714990   13724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:36.715025   13724 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:36.715038   13724 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:36.715091   13724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:36.715115   13724 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:36.715124   13724 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:36.715502   13724 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:36.856165   13724 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:36.891056   13724 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:36.891062   13724 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:36.891215   13724 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:36.903673   13724 main.go:141] libmachine: STDOUT: 
	I0327 13:58:36.903695   13724 main.go:141] libmachine: STDERR: 
	I0327 13:58:36.903758   13724 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2 +20000M
	I0327 13:58:36.914444   13724 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:36.914459   13724 main.go:141] libmachine: STDERR: 
	I0327 13:58:36.914479   13724 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:36.914483   13724 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:36.914517   13724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:44:57:0d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:36.916256   13724 main.go:141] libmachine: STDOUT: 
	I0327 13:58:36.916270   13724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:36.916294   13724 client.go:171] duration metric: took 201.361209ms to LocalClient.Create
	I0327 13:58:38.918501   13724 start.go:128] duration metric: took 2.224573541s to createHost
	I0327 13:58:38.918634   13724 start.go:83] releasing machines lock for "docker-flags-637000", held for 2.224787375s
	W0327 13:58:38.918691   13724 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:38.936511   13724 out.go:177] * Deleting "docker-flags-637000" in qemu2 ...
	W0327 13:58:38.956203   13724 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:38.956225   13724 start.go:728] Will try again in 5 seconds ...
	I0327 13:58:43.958354   13724 start.go:360] acquireMachinesLock for docker-flags-637000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:43.958662   13724 start.go:364] duration metric: took 229.75µs to acquireMachinesLock for "docker-flags-637000"
	I0327 13:58:43.958754   13724 start.go:93] Provisioning new machine with config: &{Name:docker-flags-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:43.958958   13724 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:43.966490   13724 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:44.010852   13724 start.go:159] libmachine.API.Create for "docker-flags-637000" (driver="qemu2")
	I0327 13:58:44.010905   13724 client.go:168] LocalClient.Create starting
	I0327 13:58:44.011020   13724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:44.011073   13724 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:44.011091   13724 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:44.011163   13724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:44.011204   13724 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:44.011223   13724 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:44.011741   13724 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:44.160474   13724 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:44.276787   13724 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:44.276792   13724 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:44.276950   13724 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:44.289475   13724 main.go:141] libmachine: STDOUT: 
	I0327 13:58:44.289531   13724 main.go:141] libmachine: STDERR: 
	I0327 13:58:44.289583   13724 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2 +20000M
	I0327 13:58:44.300329   13724 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:44.300349   13724 main.go:141] libmachine: STDERR: 
	I0327 13:58:44.300360   13724 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:44.300364   13724 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:44.300396   13724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a2:50:ae:7a:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/docker-flags-637000/disk.qcow2
	I0327 13:58:44.302111   13724 main.go:141] libmachine: STDOUT: 
	I0327 13:58:44.302127   13724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:44.302150   13724 client.go:171] duration metric: took 291.232ms to LocalClient.Create
	I0327 13:58:46.304362   13724 start.go:128] duration metric: took 2.345365584s to createHost
	I0327 13:58:46.304439   13724 start.go:83] releasing machines lock for "docker-flags-637000", held for 2.345788792s
	W0327 13:58:46.304785   13724 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:46.318428   13724 out.go:177] 
	W0327 13:58:46.322518   13724 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:58:46.322544   13724 out.go:239] * 
	* 
	W0327 13:58:46.325095   13724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:58:46.340402   13724 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.470083ms)

-- stdout --
	* The control-plane node docker-flags-637000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-637000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-637000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-637000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-637000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-637000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.183334ms)

-- stdout --
	* The control-plane node docker-flags-637000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-637000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-637000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-637000\"\n"
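For reference, the two "systemctl show docker" probes above only pass once the VM actually boots and the --docker-env/--docker-opt values reach dockerd's systemd unit. On a healthy node the two properties would come back roughly as follows (illustrative sample, not captured from this run; the exact ExecStart formatting varies by systemd version):

	Environment=FOO=BAR BAZ=BAT
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }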
panic.go:626: *** TestDockerFlags FAILED at 2024-03-27 13:58:46.481172 -0700 PDT m=+801.986201834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-637000 -n docker-flags-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-637000 -n docker-flags-637000: exit status 7 (30.855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-637000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-637000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-637000
--- FAIL: TestDockerFlags (10.03s)
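Every failure in this group dies at the same step, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so each qemu2 VM start is refused before the guest ever boots. A minimal sketch of how one might confirm and recover on the CI host (assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver documentation describes; the binary and socket paths are the ones shown in the logs):

	# Is the daemon's unix socket present?
	ls -l /var/run/socket_vmnet
	# Can a client attach? socket_vmnet_client connects to the socket, then execs the given command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If the connection is refused, (re)start the daemon (root is needed to create the vmnet interface).
	sudo brew services start socket_vmnet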

TestForceSystemdFlag (10.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-965000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-965000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.926592959s)

-- stdout --
	* [force-systemd-flag-965000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-965000" primary control-plane node in "force-systemd-flag-965000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-965000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:58:31.404351   13702 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:58:31.404511   13702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:31.404514   13702 out.go:304] Setting ErrFile to fd 2...
	I0327 13:58:31.404516   13702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:31.404641   13702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:58:31.405687   13702 out.go:298] Setting JSON to false
	I0327 13:58:31.421618   13702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7081,"bootTime":1711566030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:58:31.421685   13702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:58:31.427822   13702 out.go:177] * [force-systemd-flag-965000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:58:31.434859   13702 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:58:31.434891   13702 notify.go:220] Checking for updates...
	I0327 13:58:31.440805   13702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:58:31.447809   13702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:58:31.450725   13702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:58:31.453771   13702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:58:31.456775   13702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:58:31.460072   13702 config.go:182] Loaded profile config "force-systemd-env-694000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:31.460142   13702 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:31.460208   13702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:58:31.464747   13702 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:58:31.470732   13702 start.go:297] selected driver: qemu2
	I0327 13:58:31.470739   13702 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:58:31.470744   13702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:58:31.472956   13702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:58:31.475733   13702 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:58:31.478856   13702 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:58:31.478892   13702 cni.go:84] Creating CNI manager for ""
	I0327 13:58:31.478899   13702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:58:31.478903   13702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:58:31.478938   13702 start.go:340] cluster config:
	{Name:force-systemd-flag-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:58:31.483508   13702 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:58:31.490804   13702 out.go:177] * Starting "force-systemd-flag-965000" primary control-plane node in "force-systemd-flag-965000" cluster
	I0327 13:58:31.494781   13702 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:58:31.494795   13702 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:58:31.494801   13702 cache.go:56] Caching tarball of preloaded images
	I0327 13:58:31.494852   13702 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:58:31.494858   13702 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:58:31.494915   13702 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/force-systemd-flag-965000/config.json ...
	I0327 13:58:31.494926   13702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/force-systemd-flag-965000/config.json: {Name:mke29072e0839b32f2735cfdfd2a5ce60f1e1187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:58:31.495142   13702 start.go:360] acquireMachinesLock for force-systemd-flag-965000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:31.495175   13702 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "force-systemd-flag-965000"
	I0327 13:58:31.495188   13702 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:31.495212   13702 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:31.502807   13702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:31.520146   13702 start.go:159] libmachine.API.Create for "force-systemd-flag-965000" (driver="qemu2")
	I0327 13:58:31.520182   13702 client.go:168] LocalClient.Create starting
	I0327 13:58:31.520237   13702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:31.520266   13702 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:31.520276   13702 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:31.520321   13702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:31.520343   13702 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:31.520351   13702 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:31.520790   13702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:31.668648   13702 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:31.707604   13702 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:31.707609   13702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:31.707767   13702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:31.720093   13702 main.go:141] libmachine: STDOUT: 
	I0327 13:58:31.720114   13702 main.go:141] libmachine: STDERR: 
	I0327 13:58:31.720173   13702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2 +20000M
	I0327 13:58:31.731027   13702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:31.731044   13702 main.go:141] libmachine: STDERR: 
	I0327 13:58:31.731057   13702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:31.731062   13702 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:31.731086   13702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f7:05:5d:19:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:31.732716   13702 main.go:141] libmachine: STDOUT: 
	I0327 13:58:31.732734   13702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:31.732756   13702 client.go:171] duration metric: took 212.571ms to LocalClient.Create
	I0327 13:58:33.735037   13702 start.go:128] duration metric: took 2.239812833s to createHost
	I0327 13:58:33.735144   13702 start.go:83] releasing machines lock for "force-systemd-flag-965000", held for 2.239986833s
	W0327 13:58:33.735271   13702 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:33.747325   13702 out.go:177] * Deleting "force-systemd-flag-965000" in qemu2 ...
	W0327 13:58:33.772580   13702 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:33.772614   13702 start.go:728] Will try again in 5 seconds ...
	I0327 13:58:38.774332   13702 start.go:360] acquireMachinesLock for force-systemd-flag-965000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:38.918758   13702 start.go:364] duration metric: took 144.247833ms to acquireMachinesLock for "force-systemd-flag-965000"
	I0327 13:58:38.918867   13702 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:38.919051   13702 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:38.927482   13702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:38.976753   13702 start.go:159] libmachine.API.Create for "force-systemd-flag-965000" (driver="qemu2")
	I0327 13:58:38.976798   13702 client.go:168] LocalClient.Create starting
	I0327 13:58:38.976887   13702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:38.976960   13702 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:38.976986   13702 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:38.977044   13702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:38.977085   13702 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:38.977097   13702 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:38.977578   13702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:39.126564   13702 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:39.217779   13702 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:39.217785   13702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:39.217970   13702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:39.230158   13702 main.go:141] libmachine: STDOUT: 
	I0327 13:58:39.230181   13702 main.go:141] libmachine: STDERR: 
	I0327 13:58:39.230240   13702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2 +20000M
	I0327 13:58:39.241161   13702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:39.241177   13702 main.go:141] libmachine: STDERR: 
	I0327 13:58:39.241189   13702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:39.241193   13702 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:39.241234   13702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:29:f2:2b:f7:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-flag-965000/disk.qcow2
	I0327 13:58:39.242920   13702 main.go:141] libmachine: STDOUT: 
	I0327 13:58:39.242938   13702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:39.242951   13702 client.go:171] duration metric: took 266.151709ms to LocalClient.Create
	I0327 13:58:41.245247   13702 start.go:128] duration metric: took 2.32619525s to createHost
	I0327 13:58:41.245304   13702 start.go:83] releasing machines lock for "force-systemd-flag-965000", held for 2.326545s
	W0327 13:58:41.245727   13702 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:41.259369   13702 out.go:177] 
	W0327 13:58:41.270637   13702 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:58:41.270716   13702 out.go:239] * 
	* 
	W0327 13:58:41.273423   13702 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:58:41.286355   13702 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-965000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-965000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-965000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.020958ms)

-- stdout --
	* The control-plane node force-systemd-flag-965000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-965000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-965000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-27 13:58:41.385628 -0700 PDT m=+796.890594126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-965000 -n force-systemd-flag-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-965000 -n force-systemd-flag-965000: exit status 7 (36.105083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-965000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-965000
--- FAIL: TestForceSystemdFlag (10.15s)

TestForceSystemdEnv (10.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-694000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-694000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.841122708s)

-- stdout --
	* [force-systemd-env-694000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-694000" primary control-plane node in "force-systemd-env-694000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-694000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:58:26.550995   13671 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:58:26.551130   13671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:26.551133   13671 out.go:304] Setting ErrFile to fd 2...
	I0327 13:58:26.551135   13671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:58:26.551279   13671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:58:26.552374   13671 out.go:298] Setting JSON to false
	I0327 13:58:26.569645   13671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7076,"bootTime":1711566030,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:58:26.569724   13671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:58:26.573718   13671 out.go:177] * [force-systemd-env-694000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:58:26.580746   13671 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:58:26.584726   13671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:58:26.580790   13671 notify.go:220] Checking for updates...
	I0327 13:58:26.590678   13671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:58:26.593733   13671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:58:26.596723   13671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:58:26.599624   13671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0327 13:58:26.603033   13671 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:58:26.603082   13671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:58:26.606716   13671 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:58:26.613684   13671 start.go:297] selected driver: qemu2
	I0327 13:58:26.613691   13671 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:58:26.613698   13671 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:58:26.616069   13671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:58:26.619716   13671 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:58:26.622728   13671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:58:26.622768   13671 cni.go:84] Creating CNI manager for ""
	I0327 13:58:26.622776   13671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:58:26.622782   13671 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:58:26.622811   13671 start.go:340] cluster config:
	{Name:force-systemd-env-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-694000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:58:26.627587   13671 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:58:26.630708   13671 out.go:177] * Starting "force-systemd-env-694000" primary control-plane node in "force-systemd-env-694000" cluster
	I0327 13:58:26.638700   13671 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:58:26.638727   13671 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:58:26.638738   13671 cache.go:56] Caching tarball of preloaded images
	I0327 13:58:26.638837   13671 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:58:26.638852   13671 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:58:26.638914   13671 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/force-systemd-env-694000/config.json ...
	I0327 13:58:26.638927   13671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/force-systemd-env-694000/config.json: {Name:mkaf96752abe895e8f4cf60871d53b1d7695b35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:58:26.639237   13671 start.go:360] acquireMachinesLock for force-systemd-env-694000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:26.639273   13671 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "force-systemd-env-694000"
	I0327 13:58:26.639285   13671 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-694000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:26.639322   13671 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:26.642762   13671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:26.659673   13671 start.go:159] libmachine.API.Create for "force-systemd-env-694000" (driver="qemu2")
	I0327 13:58:26.659696   13671 client.go:168] LocalClient.Create starting
	I0327 13:58:26.659752   13671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:26.659781   13671 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:26.659794   13671 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:26.659837   13671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:26.659858   13671 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:26.659865   13671 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:26.660274   13671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:26.796589   13671 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:26.955806   13671 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:26.955821   13671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:26.956012   13671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:26.968960   13671 main.go:141] libmachine: STDOUT: 
	I0327 13:58:26.968980   13671 main.go:141] libmachine: STDERR: 
	I0327 13:58:26.969036   13671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2 +20000M
	I0327 13:58:26.980005   13671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:26.980027   13671 main.go:141] libmachine: STDERR: 
	I0327 13:58:26.980048   13671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:26.980061   13671 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:26.980123   13671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:82:79:82:d7:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:26.981908   13671 main.go:141] libmachine: STDOUT: 
	I0327 13:58:26.981922   13671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:26.981942   13671 client.go:171] duration metric: took 322.245334ms to LocalClient.Create
	I0327 13:58:28.984134   13671 start.go:128] duration metric: took 2.344810375s to createHost
	I0327 13:58:28.984227   13671 start.go:83] releasing machines lock for "force-systemd-env-694000", held for 2.34497375s
	W0327 13:58:28.984337   13671 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:28.994239   13671 out.go:177] * Deleting "force-systemd-env-694000" in qemu2 ...
	W0327 13:58:29.019615   13671 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:29.019646   13671 start.go:728] Will try again in 5 seconds ...
	I0327 13:58:34.021740   13671 start.go:360] acquireMachinesLock for force-systemd-env-694000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:58:34.022212   13671 start.go:364] duration metric: took 386.167µs to acquireMachinesLock for "force-systemd-env-694000"
	I0327 13:58:34.022372   13671 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-694000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:58:34.022622   13671 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:58:34.031187   13671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 13:58:34.080266   13671 start.go:159] libmachine.API.Create for "force-systemd-env-694000" (driver="qemu2")
	I0327 13:58:34.080317   13671 client.go:168] LocalClient.Create starting
	I0327 13:58:34.080416   13671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:58:34.080486   13671 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:34.080504   13671 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:34.080574   13671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:58:34.080615   13671 main.go:141] libmachine: Decoding PEM data...
	I0327 13:58:34.080631   13671 main.go:141] libmachine: Parsing certificate...
	I0327 13:58:34.081811   13671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:58:34.236618   13671 main.go:141] libmachine: Creating SSH key...
	I0327 13:58:34.282892   13671 main.go:141] libmachine: Creating Disk image...
	I0327 13:58:34.282898   13671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:58:34.283072   13671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:34.295457   13671 main.go:141] libmachine: STDOUT: 
	I0327 13:58:34.295476   13671 main.go:141] libmachine: STDERR: 
	I0327 13:58:34.295543   13671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2 +20000M
	I0327 13:58:34.306407   13671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:58:34.306424   13671 main.go:141] libmachine: STDERR: 
	I0327 13:58:34.306436   13671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:34.306442   13671 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:58:34.306497   13671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:b5:80:f6:5d:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/force-systemd-env-694000/disk.qcow2
	I0327 13:58:34.308229   13671 main.go:141] libmachine: STDOUT: 
	I0327 13:58:34.308248   13671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:58:34.308263   13671 client.go:171] duration metric: took 227.944833ms to LocalClient.Create
	I0327 13:58:36.310420   13671 start.go:128] duration metric: took 2.287796208s to createHost
	I0327 13:58:36.310497   13671 start.go:83] releasing machines lock for "force-systemd-env-694000", held for 2.288289208s
	W0327 13:58:36.310913   13671 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:58:36.325536   13671 out.go:177] 
	W0327 13:58:36.329567   13671 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:58:36.329595   13671 out.go:239] * 
	* 
	W0327 13:58:36.332292   13671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:58:36.344494   13671 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-694000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-694000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-694000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.621042ms)

-- stdout --
	* The control-plane node force-systemd-env-694000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-694000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-694000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-27 13:58:36.442544 -0700 PDT m=+791.947447376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-694000 -n force-systemd-env-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-694000 -n force-systemd-env-694000: exit status 7 (36.1005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-694000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-694000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-694000
--- FAIL: TestForceSystemdEnv (10.06s)
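
Note: every provisioning failure in this report traces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand the qemu2 VM its network file descriptor. A minimal sketch for verifying and restarting the daemon on the test host (paths taken from this run; the --vmnet-gateway value is an assumption taken from the lima-vm/socket_vmnet README, not from these logs):

	# Does the socket exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet

	# Relaunch the daemon by hand; it must run as root to create the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Re-run the failed start once the socket accepts connections.
	out/minikube-darwin-arm64 start -p force-systemd-env-694000 --memory=2048 --driver=qemu2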

TestErrorSpam/setup (9.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-959000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-959000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 --driver=qemu2 : exit status 80 (9.782141542s)

-- stdout --
	* [nospam-959000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-959000" primary control-plane node in "nospam-959000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-959000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-959000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-959000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18158
- KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-959000" primary control-plane node in "nospam-959000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-959000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.78s)

TestFunctional/serial/StartWithProxy (9.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.862561333s)

-- stdout --
	* [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-334000" primary control-plane node in "functional-334000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18158
- KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-334000" primary control-plane node in "functional-334000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-334000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52098 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (71.578459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.94s)
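
Note: this test expects "* Found network options:" and a proxy warning in the start output, but minikube drops loopback proxies before they reach the VM (see the "Local proxy ignored" lines above), and the VM never booted anyway. A sketch of how the proxy path is normally exercised, assuming a hypothetical proxy address that is reachable from inside the guest:

	# A localhost proxy is unreachable from the VM, so minikube ignores it.
	# A routable address would be forwarded into the node's Docker env.
	HTTP_PROXY=http://192.168.105.1:3128 HTTPS_PROXY=http://192.168.105.1:3128 \
		out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --driver=qemu2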

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --alsologtostderr -v=8: exit status 80 (5.198493041s)

-- stdout --
	* [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-334000" primary control-plane node in "functional-334000" cluster
	* Restarting existing qemu2 VM for "functional-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:47:33.015948   12079 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:47:33.016064   12079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:47:33.016067   12079 out.go:304] Setting ErrFile to fd 2...
	I0327 13:47:33.016069   12079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:47:33.016203   12079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:47:33.017145   12079 out.go:298] Setting JSON to false
	I0327 13:47:33.033446   12079 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6423,"bootTime":1711566030,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:47:33.033512   12079 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:47:33.039026   12079 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:47:33.045913   12079 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:47:33.049922   12079 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:47:33.045983   12079 notify.go:220] Checking for updates...
	I0327 13:47:33.055872   12079 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:47:33.058901   12079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:47:33.061988   12079 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:47:33.064926   12079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:47:33.068227   12079 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:47:33.068279   12079 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:47:33.072926   12079 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:47:33.079920   12079 start.go:297] selected driver: qemu2
	I0327 13:47:33.079928   12079 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:47:33.080009   12079 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:47:33.082263   12079 cni.go:84] Creating CNI manager for ""
	I0327 13:47:33.082283   12079 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:47:33.082333   12079 start.go:340] cluster config:
	{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:47:33.086736   12079 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:47:33.094916   12079 out.go:177] * Starting "functional-334000" primary control-plane node in "functional-334000" cluster
	I0327 13:47:33.102984   12079 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:47:33.103001   12079 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:47:33.103019   12079 cache.go:56] Caching tarball of preloaded images
	I0327 13:47:33.103077   12079 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:47:33.103084   12079 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:47:33.103154   12079 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/functional-334000/config.json ...
	I0327 13:47:33.103650   12079 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:47:33.103677   12079 start.go:364] duration metric: took 20.708µs to acquireMachinesLock for "functional-334000"
	I0327 13:47:33.103686   12079 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:47:33.103691   12079 fix.go:54] fixHost starting: 
	I0327 13:47:33.103813   12079 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
	W0327 13:47:33.103821   12079 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:47:33.112005   12079 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
	I0327 13:47:33.115946   12079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
	I0327 13:47:33.118023   12079 main.go:141] libmachine: STDOUT: 
	I0327 13:47:33.118039   12079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:47:33.118068   12079 fix.go:56] duration metric: took 14.375875ms for fixHost
	I0327 13:47:33.118074   12079 start.go:83] releasing machines lock for "functional-334000", held for 14.39375ms
	W0327 13:47:33.118080   12079 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:47:33.118108   12079 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:47:33.118112   12079 start.go:728] Will try again in 5 seconds ...
	I0327 13:47:38.120200   12079 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:47:38.120526   12079 start.go:364] duration metric: took 241.417µs to acquireMachinesLock for "functional-334000"
	I0327 13:47:38.120667   12079 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:47:38.120691   12079 fix.go:54] fixHost starting: 
	I0327 13:47:38.121360   12079 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
	W0327 13:47:38.121388   12079 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:47:38.128958   12079 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
	I0327 13:47:38.134113   12079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
	I0327 13:47:38.144721   12079 main.go:141] libmachine: STDOUT: 
	I0327 13:47:38.144792   12079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:47:38.144871   12079 fix.go:56] duration metric: took 24.185083ms for fixHost
	I0327 13:47:38.144890   12079 start.go:83] releasing machines lock for "functional-334000", held for 24.343875ms
	W0327 13:47:38.145036   12079 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:47:38.152926   12079 out.go:177] 
	W0327 13:47:38.157020   12079 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:47:38.157045   12079 out.go:239] * 
	* 
	W0327 13:47:38.159718   12079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:47:38.167813   12079 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-334000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.200357625s for "functional-334000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (68.259666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.197ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-334000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.113833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
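
Note: "current-context is not set" is a downstream symptom, not an independent bug: minikube only writes the profile's context into the kubeconfig after a successful start, and every start above exited with status 80. A sketch of the checks this test effectively performs (kubeconfig path taken from this run):

	# List contexts and show which one is current.
	KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig kubectl config get-contexts

	# Select the profile's context manually once the cluster is actually up.
	kubectl config use-context functional-334000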

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-334000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-334000 get po -A: exit status 1 (26.246625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-334000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-334000\n"*: args "kubectl --context functional-334000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-334000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.323167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl images: exit status 83 (42.999041ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.808667ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-334000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (47.035417ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.816667ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 kubectl -- --context functional-334000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 kubectl -- --context functional-334000 get pods: exit status 1 (656.4165ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-334000
	* no server found for cluster "functional-334000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-334000 kubectl -- --context functional-334000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (33.701917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-334000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-334000 get pods: exit status 1 (897.004667ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-334000
	* no server found for cluster "functional-334000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-334000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.161833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
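
Note: both kubectl variants fail for the same reason: the kubeconfig has no functional-334000 context, since the cluster never started on this run. A quick manual check, assuming kubectl is on PATH (a sketch, not part of the harness):

    # List known contexts; functional-334000 should be absent:
    kubectl config get-contexts
    # Inspect the kubeconfig the suite points at:
    kubectl config view --kubeconfig=/Users/jenkins/minikube-integration/18158-11341/kubeconfig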

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.195782125s)

-- stdout --
	* [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-334000" primary control-plane node in "functional-334000" cluster
	* Restarting existing qemu2 VM for "functional-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.196412917s for "functional-334000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (71.138333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
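
Note: the restart dies before provisioning because socket_vmnet_client cannot reach "/var/run/socket_vmnet", and every later qemu2 start in this report fails the same way. A rough triage sketch for the CI host; the brew service name assumes a Homebrew install of socket_vmnet, which this report does not confirm:

    # Is the socket present and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # On a Homebrew install (assumption), restart the daemon:
    sudo brew services restart socket_vmnet
    # Then recreate the profile, as the error output suggests:
    out/minikube-darwin-arm64 delete -p functional-334000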

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-334000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-334000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.35475ms)

** stderr ** 
	error: context "functional-334000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-334000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.677417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 logs: exit status 83 (80.66525ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |                     |
	|         | -p download-only-978000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | -o=json --download-only                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | -p download-only-255000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | -o=json --download-only                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | -p download-only-231000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | --download-only -p                                                       | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | binary-mirror-626000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:52074                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-626000                                                  | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| addons  | enable dashboard -p                                                      | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | addons-714000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | addons-714000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-714000 --wait=true                                             | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-714000                                                         | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	| start   | -p nospam-959000 -n=1 --memory=2250 --wait=false                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-959000                                                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
	| cache   | functional-334000 cache delete                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	| ssh     | functional-334000 ssh sudo                                               | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-334000                                                        | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-334000 cache reload                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-334000 kubectl --                                             | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | --context functional-334000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 13:47:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 13:47:47.846612   12169 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:47:47.846728   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:47:47.846729   12169 out.go:304] Setting ErrFile to fd 2...
	I0327 13:47:47.846731   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:47:47.846862   12169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:47:47.847877   12169 out.go:298] Setting JSON to false
	I0327 13:47:47.863983   12169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6437,"bootTime":1711566030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:47:47.864042   12169 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:47:47.869834   12169 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:47:47.879866   12169 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:47:47.884942   12169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:47:47.879936   12169 notify.go:220] Checking for updates...
	I0327 13:47:47.891885   12169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:47:47.894973   12169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:47:47.897970   12169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:47:47.899354   12169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:47:47.903257   12169 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:47:47.903307   12169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:47:47.907954   12169 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:47:47.912958   12169 start.go:297] selected driver: qemu2
	I0327 13:47:47.912961   12169 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:47:47.913017   12169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:47:47.915330   12169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:47:47.915366   12169 cni.go:84] Creating CNI manager for ""
	I0327 13:47:47.915372   12169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:47:47.915412   12169 start.go:340] cluster config:
	{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:47:47.919848   12169 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:47:47.927943   12169 out.go:177] * Starting "functional-334000" primary control-plane node in "functional-334000" cluster
	I0327 13:47:47.935008   12169 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:47:47.935021   12169 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:47:47.935033   12169 cache.go:56] Caching tarball of preloaded images
	I0327 13:47:47.935108   12169 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:47:47.935119   12169 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:47:47.935170   12169 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/functional-334000/config.json ...
	I0327 13:47:47.935855   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:47:47.935890   12169 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "functional-334000"
	I0327 13:47:47.935899   12169 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:47:47.935903   12169 fix.go:54] fixHost starting: 
	I0327 13:47:47.936039   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
	W0327 13:47:47.936047   12169 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:47:47.946929   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
	I0327 13:47:47.950068   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
	I0327 13:47:47.952214   12169 main.go:141] libmachine: STDOUT: 
	I0327 13:47:47.952236   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:47:47.952269   12169 fix.go:56] duration metric: took 16.36675ms for fixHost
	I0327 13:47:47.952281   12169 start.go:83] releasing machines lock for "functional-334000", held for 16.378916ms
	W0327 13:47:47.952287   12169 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:47:47.952318   12169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:47:47.952325   12169 start.go:728] Will try again in 5 seconds ...
	I0327 13:47:52.954447   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:47:52.954792   12169 start.go:364] duration metric: took 268.5µs to acquireMachinesLock for "functional-334000"
	I0327 13:47:52.954907   12169 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:47:52.954959   12169 fix.go:54] fixHost starting: 
	I0327 13:47:52.955647   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
	W0327 13:47:52.955663   12169 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:47:52.962918   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
	I0327 13:47:52.967118   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
	I0327 13:47:52.976767   12169 main.go:141] libmachine: STDOUT: 
	I0327 13:47:52.976818   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:47:52.976899   12169 fix.go:56] duration metric: took 21.945334ms for fixHost
	I0327 13:47:52.976908   12169 start.go:83] releasing machines lock for "functional-334000", held for 22.104666ms
	W0327 13:47:52.977077   12169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:47:52.984965   12169 out.go:177] 
	W0327 13:47:52.989019   12169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:47:52.989042   12169 out.go:239] * 
	W0327 13:47:52.991626   12169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:47:52.997887   12169 out.go:177] 
	
	
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-334000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |                     |
|         | -p download-only-978000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | -o=json --download-only                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | -p download-only-255000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | -o=json --download-only                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | -p download-only-231000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | --download-only -p                                                       | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | binary-mirror-626000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52074                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-626000                                                  | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| addons  | enable dashboard -p                                                      | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | addons-714000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | addons-714000                                                            |                      |         |                |                     |                     |
| start   | -p addons-714000 --wait=true                                             | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-714000                                                         | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| start   | -p nospam-959000 -n=1 --memory=2250 --wait=false                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-959000                                                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
| cache   | functional-334000 cache delete                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| ssh     | functional-334000 ssh sudo                                               | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-334000                                                        | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-334000 cache reload                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-334000 kubectl --                                             | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --context functional-334000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 13:47:47
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 13:47:47.846612   12169 out.go:291] Setting OutFile to fd 1 ...
I0327 13:47:47.846728   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:47.846729   12169 out.go:304] Setting ErrFile to fd 2...
I0327 13:47:47.846731   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:47.846862   12169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:47:47.847877   12169 out.go:298] Setting JSON to false
I0327 13:47:47.863983   12169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6437,"bootTime":1711566030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0327 13:47:47.864042   12169 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 13:47:47.869834   12169 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 13:47:47.879866   12169 out.go:177]   - MINIKUBE_LOCATION=18158
I0327 13:47:47.884942   12169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
I0327 13:47:47.879936   12169 notify.go:220] Checking for updates...
I0327 13:47:47.891885   12169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 13:47:47.894973   12169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 13:47:47.897970   12169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
I0327 13:47:47.899354   12169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 13:47:47.903257   12169 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:47:47.903307   12169 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 13:47:47.907954   12169 out.go:177] * Using the qemu2 driver based on existing profile
I0327 13:47:47.912958   12169 start.go:297] selected driver: qemu2
I0327 13:47:47.912961   12169 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functi
onal-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 13:47:47.913017   12169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 13:47:47.915330   12169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 13:47:47.915366   12169 cni.go:84] Creating CNI manager for ""
I0327 13:47:47.915372   12169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 13:47:47.915412   12169 start.go:340] cluster config:
{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 13:47:47.919848   12169 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 13:47:47.927943   12169 out.go:177] * Starting "functional-334000" primary control-plane node in "functional-334000" cluster
I0327 13:47:47.935008   12169 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 13:47:47.935021   12169 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 13:47:47.935033   12169 cache.go:56] Caching tarball of preloaded images
I0327 13:47:47.935108   12169 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 13:47:47.935119   12169 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 13:47:47.935170   12169 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/functional-334000/config.json ...
I0327 13:47:47.935855   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 13:47:47.935890   12169 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "functional-334000"
I0327 13:47:47.935899   12169 start.go:96] Skipping create...Using existing machine configuration
I0327 13:47:47.935903   12169 fix.go:54] fixHost starting: 
I0327 13:47:47.936039   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
W0327 13:47:47.936047   12169 fix.go:138] unexpected machine state, will restart: <nil>
I0327 13:47:47.946929   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
I0327 13:47:47.950068   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
I0327 13:47:47.952214   12169 main.go:141] libmachine: STDOUT: 
I0327 13:47:47.952236   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 13:47:47.952269   12169 fix.go:56] duration metric: took 16.36675ms for fixHost
I0327 13:47:47.952281   12169 start.go:83] releasing machines lock for "functional-334000", held for 16.378916ms
W0327 13:47:47.952287   12169 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 13:47:47.952318   12169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 13:47:47.952325   12169 start.go:728] Will try again in 5 seconds ...
I0327 13:47:52.954447   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 13:47:52.954792   12169 start.go:364] duration metric: took 268.5µs to acquireMachinesLock for "functional-334000"
I0327 13:47:52.954907   12169 start.go:96] Skipping create...Using existing machine configuration
I0327 13:47:52.954959   12169 fix.go:54] fixHost starting: 
I0327 13:47:52.955647   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
W0327 13:47:52.955663   12169 fix.go:138] unexpected machine state, will restart: <nil>
I0327 13:47:52.962918   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
I0327 13:47:52.967118   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
I0327 13:47:52.976767   12169 main.go:141] libmachine: STDOUT: 
I0327 13:47:52.976818   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 13:47:52.976899   12169 fix.go:56] duration metric: took 21.945334ms for fixHost
I0327 13:47:52.976908   12169 start.go:83] releasing machines lock for "functional-334000", held for 22.104666ms
W0327 13:47:52.977077   12169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 13:47:52.984965   12169 out.go:177] 
W0327 13:47:52.989019   12169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 13:47:52.989042   12169 out.go:239] * 
W0327 13:47:52.991626   12169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 13:47:52.997887   12169 out.go:177] 

* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
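
Both failed start attempts above end at the same STDERR line: Failed to connect to "/var/run/socket_vmnet": Connection refused. With the qemu2 driver and Network:socket_vmnet, minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, so the socket_vmnet daemon must already be listening on /var/run/socket_vmnet; the connection refusal suggests the daemon is not running on this agent. A minimal recovery sketch, assuming a Homebrew-managed socket_vmnet install (the service name and restart step are assumptions, not taken from this log):

  # check whether the daemon's listening socket exists on the agent
  ls -l /var/run/socket_vmnet
  # restart the daemon; Homebrew services for root-owned daemons need sudo
  # (assumption: socket_vmnet was installed and registered via Homebrew)
  sudo brew services restart socket_vmnet
  # once the socket is back, retry the failing profile
  out/minikube-darwin-arm64 start -p functional-334000

If the daemon is healthy, the "Restarting existing qemu2 VM" step should get past the socket_vmnet_client handshake instead of exiting with GUEST_PROVISION.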

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4174155117/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |                     |
|         | -p download-only-978000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | -o=json --download-only                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | -p download-only-255000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | -o=json --download-only                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | -p download-only-231000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-978000                                                  | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-255000                                                  | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| delete  | -p download-only-231000                                                  | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| start   | --download-only -p                                                       | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | binary-mirror-626000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52074                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-626000                                                  | binary-mirror-626000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
| addons  | enable dashboard -p                                                      | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | addons-714000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | addons-714000                                                            |                      |         |                |                     |                     |
| start   | -p addons-714000 --wait=true                                             | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-714000                                                         | addons-714000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| start   | -p nospam-959000 -n=1 --memory=2250 --wait=false                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-959000 --log_dir                                                  | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-959000                                                         | nospam-959000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-334000 cache add                                              | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
| cache   | functional-334000 cache delete                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | minikube-local-cache-test:functional-334000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| ssh     | functional-334000 ssh sudo                                               | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-334000                                                        | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-334000 cache reload                                           | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
| ssh     | functional-334000 ssh                                                    | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT | 27 Mar 24 13:47 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-334000 kubectl --                                             | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --context functional-334000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-334000                                                     | functional-334000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:47 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 13:47:47
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 13:47:47.846612   12169 out.go:291] Setting OutFile to fd 1 ...
I0327 13:47:47.846728   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:47.846729   12169 out.go:304] Setting ErrFile to fd 2...
I0327 13:47:47.846731   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:47.846862   12169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:47:47.847877   12169 out.go:298] Setting JSON to false
I0327 13:47:47.863983   12169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6437,"bootTime":1711566030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0327 13:47:47.864042   12169 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 13:47:47.869834   12169 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 13:47:47.879866   12169 out.go:177]   - MINIKUBE_LOCATION=18158
I0327 13:47:47.884942   12169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
I0327 13:47:47.879936   12169 notify.go:220] Checking for updates...
I0327 13:47:47.891885   12169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 13:47:47.894973   12169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 13:47:47.897970   12169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
I0327 13:47:47.899354   12169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 13:47:47.903257   12169 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:47:47.903307   12169 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 13:47:47.907954   12169 out.go:177] * Using the qemu2 driver based on existing profile
I0327 13:47:47.912958   12169 start.go:297] selected driver: qemu2
I0327 13:47:47.912961   12169 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 13:47:47.913017   12169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 13:47:47.915330   12169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 13:47:47.915366   12169 cni.go:84] Creating CNI manager for ""
I0327 13:47:47.915372   12169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 13:47:47.915412   12169 start.go:340] cluster config:
{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 13:47:47.919848   12169 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 13:47:47.927943   12169 out.go:177] * Starting "functional-334000" primary control-plane node in "functional-334000" cluster
I0327 13:47:47.935008   12169 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 13:47:47.935021   12169 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 13:47:47.935033   12169 cache.go:56] Caching tarball of preloaded images
I0327 13:47:47.935108   12169 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 13:47:47.935119   12169 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 13:47:47.935170   12169 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/functional-334000/config.json ...
I0327 13:47:47.935855   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 13:47:47.935890   12169 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "functional-334000"
I0327 13:47:47.935899   12169 start.go:96] Skipping create...Using existing machine configuration
I0327 13:47:47.935903   12169 fix.go:54] fixHost starting: 
I0327 13:47:47.936039   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
W0327 13:47:47.936047   12169 fix.go:138] unexpected machine state, will restart: <nil>
I0327 13:47:47.946929   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
I0327 13:47:47.950068   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
I0327 13:47:47.952214   12169 main.go:141] libmachine: STDOUT: 
I0327 13:47:47.952236   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 13:47:47.952269   12169 fix.go:56] duration metric: took 16.36675ms for fixHost
I0327 13:47:47.952281   12169 start.go:83] releasing machines lock for "functional-334000", held for 16.378916ms
W0327 13:47:47.952287   12169 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 13:47:47.952318   12169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 13:47:47.952325   12169 start.go:728] Will try again in 5 seconds ...
I0327 13:47:52.954447   12169 start.go:360] acquireMachinesLock for functional-334000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 13:47:52.954792   12169 start.go:364] duration metric: took 268.5µs to acquireMachinesLock for "functional-334000"
I0327 13:47:52.954907   12169 start.go:96] Skipping create...Using existing machine configuration
I0327 13:47:52.954959   12169 fix.go:54] fixHost starting: 
I0327 13:47:52.955647   12169 fix.go:112] recreateIfNeeded on functional-334000: state=Stopped err=<nil>
W0327 13:47:52.955663   12169 fix.go:138] unexpected machine state, will restart: <nil>
I0327 13:47:52.962918   12169 out.go:177] * Restarting existing qemu2 VM for "functional-334000" ...
I0327 13:47:52.967118   12169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:7a:cd:0d:7e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/functional-334000/disk.qcow2
I0327 13:47:52.976767   12169 main.go:141] libmachine: STDOUT: 
I0327 13:47:52.976818   12169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 13:47:52.976899   12169 fix.go:56] duration metric: took 21.945334ms for fixHost
I0327 13:47:52.976908   12169 start.go:83] releasing machines lock for "functional-334000", held for 22.104666ms
W0327 13:47:52.977077   12169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 13:47:52.984965   12169 out.go:177] 
W0327 13:47:52.989019   12169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 13:47:52.989042   12169 out.go:239] * 
W0327 13:47:52.991626   12169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 13:47:52.997887   12169 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
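
Every failure below traces back to the root cause visible in the start log above: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots. A minimal spot-check on the build host, assuming socket_vmnet is installed under the /opt/socket_vmnet prefix shown in the libmachine command line:

    # Is the daemon process alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If it was installed via Homebrew, restarting the service may recreate the socket:
    sudo brew services restart socket_vmnet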

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-334000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.328666ms)

** stderr ** 
	error: context "functional-334000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-334000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
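
The "context \"functional-334000\" does not exist" errors mean no kubeconfig entry was ever written, because the cluster never started. An illustrative check (not part of the test) to confirm which contexts are actually present:

    kubectl config get-contexts
    kubectl config current-context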

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1] stderr:
I0327 13:48:47.244173   12515 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.244633   12515 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.244637   12515 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.244639   12515 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.244779   12515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.244986   12515 mustload.go:65] Loading cluster: functional-334000
I0327 13:48:47.245182   12515 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.249110   12515 out.go:177] * The control-plane node functional-334000 host is not running: state=Stopped
I0327 13:48:47.253177   12515 out.go:177]   To start a cluster, run: "minikube start -p functional-334000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (44.244084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 status: exit status 7 (32.050958ms)

-- stdout --
	functional-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-334000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.203ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-334000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 status -o json: exit status 7 (32.045958ms)

-- stdout --
	{"Name":"functional-334000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-334000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (31.980875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
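
The -f argument is a Go template rendered over the same status structure shown in the JSON output above (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), so any of those fields can be selected. A sketch using only fields present in that JSON:

    out/minikube-darwin-arm64 -p functional-334000 status -f '{{.Name}}: host={{.Host}} apiserver={{.APIServer}}'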

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-334000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.19225ms)

** stderr ** 
	error: context "functional-334000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-334000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-334000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-334000 describe po hello-node-connect: exit status 1 (25.969833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:1600: "kubectl --context functional-334000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-334000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-334000 logs -l app=hello-node-connect: exit status 1 (26.305292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:1606: "kubectl --context functional-334000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-334000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-334000 describe svc hello-node-connect: exit status 1 (26.351458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:1612: "kubectl --context functional-334000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (31.974667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-334000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.072209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
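
The test aborts before polling because it cannot even build a client config for the missing context. On a healthy cluster, the pod it waits for can be checked directly; assuming minikube's usual placement of the provisioner in kube-system, an illustrative check is:

    kubectl --context functional-334000 -n kube-system get pod storage-provisioner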

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "echo hello": exit status 83 (41.989459ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n"*. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "cat /etc/hostname": exit status 83 (41.90225ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-334000"- but got *"* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n"*. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (31.389ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.633208ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.132ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-334000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-334000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp functional-334000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2183882095/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 cp functional-334000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2183882095/001/cp-test.txt: exit status 83 (40.812709ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 cp functional-334000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2183882095/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.38475ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2183882095/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.69625ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.014792ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-334000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-334000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
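
The (-want +got) blocks are textual diffs: "-" lines show the expected file content ("Test file for checking file cp process") and "+" lines show what actually came back, i.e. the driver's "host is not running" advice captured on stdout. The round trip the test performs can be reproduced by hand with the same commands it logs:

    out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
    # and compare against the local copy:
    cat testdata/cp-test.txt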

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11752/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/test/nested/copy/11752/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/test/nested/copy/11752/hosts": exit status 83 (42.444541ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/test/nested/copy/11752/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-334000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-334000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.642417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11752.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/11752.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/11752.pem": exit status 83 (46.0565ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/11752.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /etc/ssl/certs/11752.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/11752.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11752.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/11752.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/11752.pem": exit status 83 (51.631417ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/11752.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /usr/share/ca-certificates/11752.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/11752.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (42.779167ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/117522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/117522.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/117522.pem": exit status 83 (39.734166ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/117522.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /etc/ssl/certs/117522.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/117522.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/117522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/117522.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/117522.pem": exit status 83 (41.628042ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/117522.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /usr/share/ca-certificates/117522.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/117522.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.726959ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-334000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-334000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.052417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
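Every CertSync mismatch above has the same shape: the "-" side is the local PEM, the "+" side is minikube's stopped-host advice, because the ssh command exits 83 before any file can be read. A minimal Go sketch of this style of check, assuming a minikube binary on PATH (the test uses out/minikube-darwin-arm64) and the same file paths as the test:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Local reference certificate (path taken from the test above).
	want, err := os.ReadFile("minikube_test2.pem")
	if err != nil {
		panic(err)
	}
	// Fetch the synced copy from inside the VM, as the test does.
	got, err := exec.Command("minikube", "-p", "functional-334000",
		"ssh", "sudo cat /usr/share/ca-certificates/117522.pem").Output()
	if err != nil {
		// With the host stopped, minikube exits 83 and prints advice text,
		// which becomes the "+" side of the diffs above.
		fmt.Println("ssh failed:", err)
		return
	}
	if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
		fmt.Println("certificate mismatch")
	}
}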

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-334000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-334000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.662375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-334000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000: exit status 7 (32.556042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
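The NodeLabels check is a plain kubectl go-template over the first node's label keys; with the functional-334000 context missing from the kubeconfig, every assertion sees the same configuration error. A sketch of the equivalent invocation, assuming kubectl on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the first node's label keys with the same go-template as the test.
	tmpl := "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
	out, err := exec.Command("kubectl", "--context", "functional-334000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		// With no such context in the kubeconfig, kubectl exits 1 with the
		// configuration error quoted above.
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("node labels: %s\n", out)
}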

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active crio": exit status 83 (39.38425ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
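For reference, `systemctl is-active` prints a unit state and exits non-zero for anything but "active", so the test accepts a non-zero exit as long as the output is an inactivity state for the non-selected runtime; here it received minikube's advice text instead. A sketch of the same probe, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl is-active prints the unit state; a non-zero exit is fine as
	// long as the state really is an inactivity state, not advice text.
	out, _ := exec.Command("minikube", "-p", "functional-334000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	switch state := strings.TrimSpace(string(out)); state {
	case "inactive", "failed", "unknown":
		fmt.Println("crio is not active, as expected:", state)
	default:
		fmt.Println("unexpected output:", state)
	}
}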

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 version -o=json --components: exit status 83 (43.948458ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
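The components check is effectively a substring scan over what `minikube version -o=json --components` prints on a running cluster; with the host stopped, the advice text contains none of the expected component names. A sketch of that scan, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-334000",
		"version", "-o=json", "--components").Output()
	if err != nil {
		fmt.Println("version failed:", err) // exit status 83 when the host is stopped
		return
	}
	// Scan the output for each expected component name, as the test does.
	for _, want := range []string{"buildctl", "containerd", "crictl", "docker"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing component:", want)
		}
	}
}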

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr:
I0327 13:48:47.665795   12530 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.665945   12530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.665949   12530 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.665951   12530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.666081   12530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.666494   12530 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.666551   12530 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr:
I0327 13:48:47.904656   12542 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.904792   12542 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.904795   12542 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.904797   12542 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.904921   12542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.905323   12542 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.905388   12542 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr:
I0327 13:48:47.867767   12540 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.867904   12540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.867908   12540 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.867910   12540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.868025   12540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.868445   12540 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.868504   12540 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr:
I0327 13:48:47.829413   12538 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.829561   12538 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.829564   12538 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.829566   12538 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.829687   12538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.830130   12538 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.830190   12538 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
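All four ImageList failures (short, table, json, yaml) are one assertion applied to four output formats: `image ls` should list registry.k8s.io/pause, but with no running VM the list is empty ("[]", or a table with no rows). A sketch of that loop, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, format := range []string{"short", "table", "json", "yaml"} {
		out, err := exec.Command("minikube", "-p", "functional-334000",
			"image", "ls", "--format", format).Output()
		if err != nil {
			fmt.Println(format, "failed:", err)
			continue
		}
		// Each format variant must show the pause image on a healthy cluster.
		if !strings.Contains(string(out), "registry.k8s.io/pause") {
			fmt.Printf("%s: pause image not listed\n", format)
		}
	}
}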

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh pgrep buildkitd: exit status 83 (48.8975ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build --alsologtostderr:
I0327 13:48:47.752588   12534 out.go:291] Setting OutFile to fd 1 ...
I0327 13:48:47.753390   12534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.753396   12534 out.go:304] Setting ErrFile to fd 2...
I0327 13:48:47.753399   12534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:48:47.753566   12534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:48:47.753990   12534 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.754453   12534 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:48:47.754694   12534 build_images.go:133] succeeded building to: 
I0327 13:48:47.754698   12534 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
functional_test.go:442: expected "localhost/my-image:functional-334000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
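ImageBuild chains three steps: confirm buildkitd is reachable via `ssh pgrep buildkitd`, build the tag from testdata/build, then require the tag in `image ls`. A sketch of the build-and-verify half, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tag := "localhost/my-image:functional-334000"
	// Build from the test's context directory, then require the tag to appear.
	if out, err := exec.Command("minikube", "-p", "functional-334000",
		"image", "build", "-t", tag, "testdata/build").CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	out, err := exec.Command("minikube", "-p", "functional-334000", "image", "ls").Output()
	if err != nil || !strings.Contains(string(out), tag) {
		fmt.Println("image built but not listed:", tag)
	}
}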

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-334000 docker-env) && out/minikube-darwin-arm64 status -p functional-334000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-334000 docker-env) && out/minikube-darwin-arm64 status -p functional-334000": exit status 1 (48.092875ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2: exit status 83 (44.744333ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
** stderr ** 
	I0327 13:48:47.528006   12524 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:48:47.528995   12524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.528999   12524 out.go:304] Setting ErrFile to fd 2...
	I0327 13:48:47.529002   12524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.529162   12524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:48:47.529380   12524 mustload.go:65] Loading cluster: functional-334000
	I0327 13:48:47.529575   12524 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:48:47.533409   12524 out.go:177] * The control-plane node functional-334000 host is not running: state=Stopped
	I0327 13:48:47.537265   12524 out.go:177]   To start a cluster, run: "minikube start -p functional-334000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
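All three UpdateContextCmd subtests run the same command and differ only in the substring they accept ("No changes" or "context has been updated"). A sketch of that acceptance check, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-334000",
		"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
	text := string(out)
	// A healthy run reports one of the two outcomes; anything else (such as
	// the stopped-host advice above) is a failure.
	if err != nil || !(strings.Contains(text, "No changes") ||
		strings.Contains(text, "context has been updated")) {
		fmt.Printf("unexpected update-context result (err=%v):\n%s", err, text)
	}
}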

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2: exit status 83 (44.738583ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
** stderr ** 
	I0327 13:48:47.620991   12528 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:48:47.621151   12528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.621158   12528 out.go:304] Setting ErrFile to fd 2...
	I0327 13:48:47.621161   12528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.621304   12528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:48:47.621550   12528 mustload.go:65] Loading cluster: functional-334000
	I0327 13:48:47.621736   12528 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:48:47.625336   12528 out.go:177] * The control-plane node functional-334000 host is not running: state=Stopped
	I0327 13:48:47.629162   12528 out.go:177]   To start a cluster, run: "minikube start -p functional-334000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2: exit status 83 (47.717042ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
** stderr ** 
	I0327 13:48:47.572843   12526 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:48:47.573004   12526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.573007   12526 out.go:304] Setting ErrFile to fd 2...
	I0327 13:48:47.573009   12526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.573147   12526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:48:47.573359   12526 mustload.go:65] Loading cluster: functional-334000
	I0327 13:48:47.573561   12526 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:48:47.578144   12526 out.go:177] * The control-plane node functional-334000 host is not running: state=Stopped
	I0327 13:48:47.586298   12526 out.go:177]   To start a cluster, run: "minikube start -p functional-334000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-334000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.791375ms)

** stderr ** 
	error: context "functional-334000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-334000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 service list: exit status 83 (45.883417ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-334000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 service list -o json: exit status 83 (43.901042ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-334000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 service --namespace=default --https --url hello-node: exit status 83 (43.891875ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-334000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 service hello-node --url --format={{.IP}}: exit status 83 (47.829375ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-334000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 service hello-node --url: exit status 83 (44.8255ms)

-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-334000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test.go:1565: failed to parse "* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"": parse "* The control-plane node functional-334000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-334000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
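The parse failure at functional_test.go:1565 is ordinary net/url behavior: the advice text spans two lines, and the embedded newline is a control character that url.Parse rejects. A self-contained reproduction (the service URL in the second call is a hypothetical example):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The advice text is multi-line; url.Parse rejects the embedded newline.
	bad := "* The control-plane node functional-334000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-334000\""
	if _, err := url.Parse(bad); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
	// A real service URL (hypothetical address) parses without complaint.
	if u, err := url.Parse("http://192.168.105.4:31000"); err == nil {
		fmt.Println("host:", u.Host)
	}
}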

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0327 13:47:55.852531   12287 out.go:291] Setting OutFile to fd 1 ...
I0327 13:47:55.852722   12287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:55.852724   12287 out.go:304] Setting ErrFile to fd 2...
I0327 13:47:55.852727   12287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:47:55.852861   12287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:47:55.853055   12287 mustload.go:65] Loading cluster: functional-334000
I0327 13:47:55.853248   12287 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:47:55.858937   12287 out.go:177] * The control-plane node functional-334000 host is not running: state=Stopped
I0327 13:47:55.874914   12287 out.go:177]   To start a cluster, run: "minikube start -p functional-334000"

stdout: * The control-plane node functional-334000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-334000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 12288: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-334000": client config: context "functional-334000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (105.93s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-334000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-334000 get svc nginx-svc: exit status 1 (68.7095ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-334000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-334000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (105.93s)
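The initial error, Get "http:": http: no Host in request URL, is what net/http returns when the tunnel never published an ingress IP and the test is left fetching a bare "http://". A short reproduction:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// With no ingress IP from the tunnel, the test fetches a bare scheme.
	if _, err := http.Get("http://"); err != nil {
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}
}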

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr: (1.310583459s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-334000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr: (1.308990083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-334000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.195115792s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-334000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr: (1.175034625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-334000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image save gcr.io/google-containers/addon-resizer:functional-334000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-334000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
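ImageSaveToFile and ImageLoadFromFile are two halves of a round trip, so once `image save` produces no tarball, the load test has nothing real to load. A sketch of the round trip, assuming minikube on PATH and a hypothetical tarball path:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-334000"
	tar := "/tmp/addon-resizer-save.tar" // hypothetical path
	if out, err := exec.Command("minikube", "-p", "functional-334000",
		"image", "save", img, tar).CombinedOutput(); err != nil {
		fmt.Printf("save failed: %v\n%s", err, out)
		return
	}
	// The save test's core assertion: the tarball must exist afterwards.
	if _, err := os.Stat(tar); err != nil {
		fmt.Println("tarball missing after save:", err)
		return
	}
	if out, err := exec.Command("minikube", "-p", "functional-334000",
		"image", "load", tar).CombinedOutput(); err != nil {
		fmt.Printf("load failed: %v\n%s", err, out)
	}
}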

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036784042s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
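The dig probe can be reproduced in Go with a resolver pinned to the cluster DNS IP shown in the scutil dump (10.96.0.10); without a running tunnel, the lookup times out the same way. A minimal sketch:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Pin the resolver to the cluster DNS server, like dig @10.96.0.10.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err) // times out when no tunnel is running
		return
	}
	fmt.Println("resolved:", addrs)
}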

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.74s)
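
This failure is the HTTP-level counterpart of the dig timeout above: the test fetches the service by its cluster DNS name and expects the nginx welcome page. A minimal Go sketch of that check, reconstructed from the log; the URL and the expected body string come from the messages above, and the 20-second timeout is illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 20 * time.Second}
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		// This run lands here with "context deadline exceeded": the host
		// resolver never gets an answer for the cluster.local name.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the test expects "Welcome to nginx!" in here
}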

TestMultiControlPlane/serial/StartCluster (10.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-746000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-746000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.965238583s)

-- stdout --
	* [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-746000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:50:29.249345   12616 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:50:29.249468   12616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:50:29.249474   12616 out.go:304] Setting ErrFile to fd 2...
	I0327 13:50:29.249484   12616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:50:29.249615   12616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:50:29.250731   12616 out.go:298] Setting JSON to false
	I0327 13:50:29.266987   12616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6599,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:50:29.267056   12616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:50:29.273635   12616 out.go:177] * [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:50:29.279520   12616 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:50:29.283594   12616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:50:29.279557   12616 notify.go:220] Checking for updates...
	I0327 13:50:29.289551   12616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:50:29.292628   12616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:50:29.294177   12616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:50:29.297547   12616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:50:29.300707   12616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:50:29.304412   12616 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:50:29.311555   12616 start.go:297] selected driver: qemu2
	I0327 13:50:29.311562   12616 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:50:29.311569   12616 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:50:29.313928   12616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:50:29.317604   12616 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:50:29.320691   12616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:50:29.320728   12616 cni.go:84] Creating CNI manager for ""
	I0327 13:50:29.320733   12616 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 13:50:29.320736   12616 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 13:50:29.320772   12616 start.go:340] cluster config:
	{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:50:29.325260   12616 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:50:29.331457   12616 out.go:177] * Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	I0327 13:50:29.335642   12616 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:50:29.335657   12616 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:50:29.335669   12616 cache.go:56] Caching tarball of preloaded images
	I0327 13:50:29.335724   12616 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:50:29.335730   12616 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:50:29.335980   12616 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/ha-746000/config.json ...
	I0327 13:50:29.335993   12616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/ha-746000/config.json: {Name:mkb155933bba17418db8e3a01bdd74bc872b0b16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:50:29.336223   12616 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:50:29.336257   12616 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "ha-746000"
	I0327 13:50:29.336269   12616 start.go:93] Provisioning new machine with config: &{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:50:29.336302   12616 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:50:29.344525   12616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:50:29.362826   12616 start.go:159] libmachine.API.Create for "ha-746000" (driver="qemu2")
	I0327 13:50:29.362850   12616 client.go:168] LocalClient.Create starting
	I0327 13:50:29.362904   12616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:50:29.362935   12616 main.go:141] libmachine: Decoding PEM data...
	I0327 13:50:29.362952   12616 main.go:141] libmachine: Parsing certificate...
	I0327 13:50:29.362996   12616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:50:29.363023   12616 main.go:141] libmachine: Decoding PEM data...
	I0327 13:50:29.363031   12616 main.go:141] libmachine: Parsing certificate...
	I0327 13:50:29.363351   12616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:50:29.502399   12616 main.go:141] libmachine: Creating SSH key...
	I0327 13:50:29.548181   12616 main.go:141] libmachine: Creating Disk image...
	I0327 13:50:29.548187   12616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:50:29.548352   12616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:29.560562   12616 main.go:141] libmachine: STDOUT: 
	I0327 13:50:29.560584   12616 main.go:141] libmachine: STDERR: 
	I0327 13:50:29.560633   12616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2 +20000M
	I0327 13:50:29.571407   12616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:50:29.571439   12616 main.go:141] libmachine: STDERR: 
	I0327 13:50:29.571452   12616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:29.571458   12616 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:50:29.571492   12616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:36:d7:6e:ea:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:29.573307   12616 main.go:141] libmachine: STDOUT: 
	I0327 13:50:29.573326   12616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:50:29.573340   12616 client.go:171] duration metric: took 210.488292ms to LocalClient.Create
	I0327 13:50:31.575591   12616 start.go:128] duration metric: took 2.239280709s to createHost
	I0327 13:50:31.575682   12616 start.go:83] releasing machines lock for "ha-746000", held for 2.239442166s
	W0327 13:50:31.575736   12616 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:50:31.591724   12616 out.go:177] * Deleting "ha-746000" in qemu2 ...
	W0327 13:50:31.616164   12616 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:50:31.616199   12616 start.go:728] Will try again in 5 seconds ...
	I0327 13:50:36.618389   12616 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:50:36.618727   12616 start.go:364] duration metric: took 254.917µs to acquireMachinesLock for "ha-746000"
	I0327 13:50:36.618856   12616 start.go:93] Provisioning new machine with config: &{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:50:36.619091   12616 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:50:36.627678   12616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:50:36.677841   12616 start.go:159] libmachine.API.Create for "ha-746000" (driver="qemu2")
	I0327 13:50:36.677900   12616 client.go:168] LocalClient.Create starting
	I0327 13:50:36.678026   12616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:50:36.678126   12616 main.go:141] libmachine: Decoding PEM data...
	I0327 13:50:36.678144   12616 main.go:141] libmachine: Parsing certificate...
	I0327 13:50:36.678213   12616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:50:36.678261   12616 main.go:141] libmachine: Decoding PEM data...
	I0327 13:50:36.678278   12616 main.go:141] libmachine: Parsing certificate...
	I0327 13:50:36.678877   12616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:50:36.829325   12616 main.go:141] libmachine: Creating SSH key...
	I0327 13:50:37.112280   12616 main.go:141] libmachine: Creating Disk image...
	I0327 13:50:37.112288   12616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:50:37.112518   12616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:37.125688   12616 main.go:141] libmachine: STDOUT: 
	I0327 13:50:37.125708   12616 main.go:141] libmachine: STDERR: 
	I0327 13:50:37.125788   12616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2 +20000M
	I0327 13:50:37.136449   12616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:50:37.136473   12616 main.go:141] libmachine: STDERR: 
	I0327 13:50:37.136486   12616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:37.136490   12616 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:50:37.136518   12616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c9:2c:83:96:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:50:37.138263   12616 main.go:141] libmachine: STDOUT: 
	I0327 13:50:37.138285   12616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:50:37.138298   12616 client.go:171] duration metric: took 460.3975ms to LocalClient.Create
	I0327 13:50:39.140450   12616 start.go:128] duration metric: took 2.521358417s to createHost
	I0327 13:50:39.140499   12616 start.go:83] releasing machines lock for "ha-746000", held for 2.521779958s
	W0327 13:50:39.140892   12616 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:50:39.150420   12616 out.go:177] 
	W0327 13:50:39.155665   12616 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:50:39.155719   12616 out.go:239] * 
	* 
	W0327 13:50:39.158358   12616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:50:39.168522   12616 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-746000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (68.728583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.04s)
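
Both VM-creation attempts above die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon. A minimal Go sketch of that connectivity check, assuming only the socket path /var/run/socket_vmnet shown in the config dump; the probe itself is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the repeated failure in the log above: "connection refused"
		// on a unix socket means the socket file exists but nothing is
		// accepting on it, i.e. the daemon is down or left a stale socket.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

This points at the host's socket_vmnet service rather than at minikube or QEMU, and it is the common root cause behind the cascade of TestMultiControlPlane failures that follow.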

TestMultiControlPlane/serial/DeployApp (114.92s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.743875ms)

** stderr ** 
	error: cluster "ha-746000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- rollout status deployment/busybox: exit status 1 (58.620875ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.69625ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.307708ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.29475ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.598792ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.357084ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.871041ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.427167ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.585792ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.685458ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.663875ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.322125ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.802875ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.657291ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.982083ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.154709ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.798166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.92s)
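
The eleven identical "failed to retrieve Pod IPs" attempts above are the test's polling loop at ha_test.go:140 running against a cluster that never started. A rough Go sketch of that pattern, retrying a command until it produces output or a deadline passes; the helper name, interval, and deadline are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func pollPodIPs(budget time.Duration) (string, error) {
	var lastErr error
	for end := time.Now().Add(budget); time.Now().Before(end); time.Sleep(10 * time.Second) {
		out, err := exec.Command("kubectl", "--context", "ha-746000",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			lastErr = err // e.g. `no server found for cluster "ha-746000"`
			continue
		}
		if len(out) > 0 {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("pod IPs never became available: %w", lastErr)
}

func main() {
	ips, err := pollPodIPs(2 * time.Minute)
	if err != nil {
		fmt.Println(err) // this run exhausts the budget, as the log shows
		return
	}
	fmt.Println("pod IPs:", ips)
}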

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-746000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.518916ms)

** stderr ** 
	error: no server found for cluster "ha-746000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.974875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-746000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-746000 -v=7 --alsologtostderr: exit status 83 (46.466542ms)

-- stdout --
	* The control-plane node ha-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-746000"

-- /stdout --
** stderr ** 
	I0327 13:52:34.293769   12755 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.294340   12755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.294343   12755 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.294351   12755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.294534   12755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.294754   12755 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.294947   12755 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.299759   12755 out.go:177] * The control-plane node ha-746000 host is not running: state=Stopped
	I0327 13:52:34.303693   12755 out.go:177]   To start a cluster, run: "minikube start -p ha-746000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-746000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.64975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-746000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-746000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.860084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-746000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-746000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-746000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.395125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-746000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-746000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.381583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)
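
A rough Go sketch of what the check above asserts, reconstructed from the expected/got text: decode `minikube profile list --output json` and inspect the node count and status of the "ha-746000" profile. The trimmed structs are illustrative stand-ins for minikube's config types:

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name string
			}
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated sample in the shape of the output quoted above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-746000","Status":"Stopped","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test wants 4 nodes and "HAppy" status; this run has 1 and "Stopped".
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}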

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status --output json -v=7 --alsologtostderr: exit status 7 (32.207459ms)

-- stdout --
	{"Name":"ha-746000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0327 13:52:34.538986   12768 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.539110   12768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.539114   12768 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.539116   12768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.539242   12768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.539377   12768 out.go:298] Setting JSON to true
	I0327 13:52:34.539389   12768 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.539438   12768 notify.go:220] Checking for updates...
	I0327 13:52:34.539590   12768 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.539597   12768 status.go:255] checking status of ha-746000 ...
	I0327 13:52:34.539800   12768 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:34.539804   12768 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:34.539806   12768 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-746000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.581958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
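
Unlike the failures around it, CopyFile trips on a JSON shape mismatch: the status command printed a single object (see the stdout above) while the test decodes into a []cmd.Status slice. A minimal Go sketch of a decoder that tolerates both shapes; the trimmed Status struct is an illustrative stand-in for minikube's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name string
	Host string
}

// decodeStatuses accepts either a JSON array of statuses or a bare object.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"ha-746000","Host":"Stopped"}`)
	statuses, err := decodeStatuses(raw)
	fmt.Println(statuses, err) // [{ha-746000 Stopped}] <nil>
}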

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.704917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 13:52:34.604471   12772 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.604867   12772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.604870   12772 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.604872   12772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.605030   12772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.605276   12772 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.605455   12772 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.609566   12772 out.go:177] 
	W0327 13:52:34.612771   12772 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 13:52:34.612775   12772 out.go:239] * 
	* 
	W0327 13:52:34.615091   12772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:52:34.619739   12772 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-746000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (32.186ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:34.655137   12774 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.655308   12774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.655311   12774 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.655314   12774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.655441   12774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.655563   12774 out.go:298] Setting JSON to false
	I0327 13:52:34.655578   12774 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.655640   12774 notify.go:220] Checking for updates...
	I0327 13:52:34.655807   12774 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.655815   12774 status.go:255] checking status of ha-746000 ...
	I0327 13:52:34.656020   12774 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:34.656023   12774 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:34.656025   12774 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.228041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
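
Exit status 85 (GUEST_NODE_RETRIEVE) is a direct consequence of the earlier StartCluster failure: the saved profile records only the primary control-plane node, so looking up "m02" cannot succeed. A hedged sketch of the shape of that lookup (minikube's real version lives in its node package; the types here are trimmed to what the log shows):

package main

import "fmt"

type Node struct{ Name string }

type ClusterConfig struct{ Nodes []Node }

// retrieve returns the named node from the saved profile. This profile holds
// a single control-plane node whose Name is empty, so "m02" is never found.
func retrieve(cc ClusterConfig, name string) (*Node, error) {
	for i := range cc.Nodes {
		if cc.Nodes[i].Name == name {
			return &cc.Nodes[i], nil
		}
	}
	return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	cc := ClusterConfig{Nodes: []Node{{Name: ""}}} // what this profile's config.json records
	if _, err := retrieve(cc, "m02"); err != nil {
		fmt.Println(err) // retrieving node: Could not find node m02
	}
}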

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-746000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.632208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)
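
ha_test.go:413 reads the 'profile list --output json' blob quoted above and asserts on the Status field; with the VM never running, the profile reports "Stopped" rather than "Degraded". A minimal sketch of extracting that field, assuming only the JSON keys visible in the captured output:

package main

import (
	"encoding/json"
	"fmt"
)

// profile carries just the two keys this assertion needs; the captured blob
// holds the full cluster config as well.
type profile struct {
	Name   string `json:"Name"`
	Status string `json:"Status"`
}

type profileList struct {
	Valid []profile `json:"valid"`
}

func statusOf(raw []byte, name string) (string, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return "", err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return p.Status, nil
		}
	}
	return "", fmt.Errorf("profile %q not in list", name)
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-746000","Status":"Stopped"}]}`)
	got, _ := statusOf(raw, "ha-746000")
	fmt.Println(got) // Stopped, where the test wanted Degraded
}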

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.797875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:34.825664   12784 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.826050   12784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.826053   12784 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.826056   12784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.826220   12784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.826458   12784 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.826648   12784 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.831340   12784 out.go:177] 
	W0327 13:52:34.834386   12784 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 13:52:34.834390   12784 out.go:239] * 
	* 
	W0327 13:52:34.836349   12784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:52:34.841286   12784 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0327 13:52:34.825664   12784 out.go:291] Setting OutFile to fd 1 ...
I0327 13:52:34.826050   12784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:52:34.826053   12784 out.go:304] Setting ErrFile to fd 2...
I0327 13:52:34.826056   12784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:52:34.826220   12784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:52:34.826458   12784 mustload.go:65] Loading cluster: ha-746000
I0327 13:52:34.826648   12784 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:52:34.831340   12784 out.go:177] 
W0327 13:52:34.834386   12784 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0327 13:52:34.834390   12784 out.go:239] * 
* 
W0327 13:52:34.836349   12784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 13:52:34.841286   12784 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-746000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (32.28925ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:34.876907   12786 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:34.877056   12786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.877060   12786 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:34.877062   12786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:34.877203   12786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:34.877334   12786 out.go:298] Setting JSON to false
	I0327 13:52:34.877345   12786 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:34.877410   12786 notify.go:220] Checking for updates...
	I0327 13:52:34.877541   12786 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:34.877546   12786 status.go:255] checking status of ha-746000 ...
	I0327 13:52:34.877740   12786 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:34.877743   12786 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:34.877745   12786 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (77.2835ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:36.018372   12788 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:36.018558   12788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:36.018562   12788 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:36.018565   12788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:36.018706   12788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:36.018852   12788 out.go:298] Setting JSON to false
	I0327 13:52:36.018866   12788 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:36.018909   12788 notify.go:220] Checking for updates...
	I0327 13:52:36.019102   12788 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:36.019109   12788 status.go:255] checking status of ha-746000 ...
	I0327 13:52:36.019356   12788 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:36.019361   12788 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:36.019364   12788 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (75.528166ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:38.078113   12790 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:38.078304   12790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:38.078308   12790 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:38.078311   12790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:38.078498   12790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:38.078660   12790 out.go:298] Setting JSON to false
	I0327 13:52:38.078676   12790 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:38.078715   12790 notify.go:220] Checking for updates...
	I0327 13:52:38.078956   12790 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:38.078967   12790 status.go:255] checking status of ha-746000 ...
	I0327 13:52:38.079235   12790 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:38.079240   12790 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:38.079243   12790 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (75.707167ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:39.843029   12797 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:39.843192   12797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:39.843196   12797 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:39.843200   12797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:39.843365   12797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:39.843521   12797 out.go:298] Setting JSON to false
	I0327 13:52:39.843537   12797 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:39.843574   12797 notify.go:220] Checking for updates...
	I0327 13:52:39.843794   12797 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:39.843800   12797 status.go:255] checking status of ha-746000 ...
	I0327 13:52:39.844042   12797 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:39.844047   12797 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:39.844050   12797 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (75.530125ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:42.906920   12800 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:42.907138   12800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:42.907143   12800 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:42.907146   12800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:42.907308   12800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:42.907799   12800 out.go:298] Setting JSON to false
	I0327 13:52:42.907817   12800 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:42.908170   12800 notify.go:220] Checking for updates...
	I0327 13:52:42.908358   12800 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:42.908396   12800 status.go:255] checking status of ha-746000 ...
	I0327 13:52:42.908844   12800 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:42.908851   12800 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:42.908854   12800 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (76.281208ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:52:49.369614   12802 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:52:49.369815   12802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:49.369820   12802 out.go:304] Setting ErrFile to fd 2...
	I0327 13:52:49.369823   12802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:52:49.369982   12802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:52:49.370132   12802 out.go:298] Setting JSON to false
	I0327 13:52:49.370147   12802 mustload.go:65] Loading cluster: ha-746000
	I0327 13:52:49.370183   12802 notify.go:220] Checking for updates...
	I0327 13:52:49.370394   12802 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:52:49.370402   12802 status.go:255] checking status of ha-746000 ...
	I0327 13:52:49.370684   12802 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:52:49.370689   12802 status.go:343] host is not running, skipping remaining checks
	I0327 13:52:49.370692   12802 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (76.104708ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:53:00.424518   12811 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:00.424691   12811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:00.424695   12811 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:00.424698   12811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:00.424854   12811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:00.425029   12811 out.go:298] Setting JSON to false
	I0327 13:53:00.425045   12811 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:00.425078   12811 notify.go:220] Checking for updates...
	I0327 13:53:00.425313   12811 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:00.425320   12811 status.go:255] checking status of ha-746000 ...
	I0327 13:53:00.425589   12811 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:53:00.425593   12811 status.go:343] host is not running, skipping remaining checks
	I0327 13:53:00.425596   12811 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (76.097166ms)

                                                
                                                
-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:53:16.021986   12818 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:16.022177   12818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:16.022182   12818 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:16.022185   12818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:16.022345   12818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:16.022507   12818 out.go:298] Setting JSON to false
	I0327 13:53:16.022523   12818 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:16.022556   12818 notify.go:220] Checking for updates...
	I0327 13:53:16.022792   12818 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:16.022803   12818 status.go:255] checking status of ha-746000 ...
	I0327 13:53:16.023065   12818 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:53:16.023070   12818 status.go:343] host is not running, skipping remaining checks
	I0327 13:53:16.023073   12818 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (34.593542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (41.26s)
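
The 41-second runtime is the harness retrying, not the node recovering: ha_test.go:428 re-runs "minikube status" with growing pauses (the log timestamps step from 13:52:34 out to 13:53:16) until the cluster looks healthy or the budget is spent, and every attempt here exits 7. A sketch of that style of backoff poll; the timings are illustrative, not the test's actual constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForRunning polls "minikube status" until it exits 0 or the budget runs out.
func waitForRunning(profile string, budget time.Duration) error {
	delay := time.Second
	for start := time.Now(); time.Since(start) < budget; {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		if err == nil {
			return nil // exit status 0: every component reports Running
		}
		fmt.Printf("status not ready (%v), retrying in %v\n%s", err, delay, out)
		time.Sleep(delay)
		delay *= 2 // back off between attempts, as the timestamps above suggest
	}
	return fmt.Errorf("%s never reached Running within %v", profile, budget)
}

func main() {
	if err := waitForRunning("ha-746000", 40*time.Second); err != nil {
		fmt.Println(err)
	}
}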

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-746000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-746000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.38575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
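
Every restart attempt in the next test fails before provisioning with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing was listening on the qemu2 driver's networking socket when QEMU was launched through socket_vmnet_client. A quick diagnostic probe for that condition (a standalone sketch, not part of the harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If no socket_vmnet daemon is listening here, every qemu2 start that
	// routes networking through socket_vmnet_client fails exactly as logged.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}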

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-746000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-746000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-746000 -v=7 --alsologtostderr: (3.544720667s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-746000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-746000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.221048667s)

                                                
                                                
-- stdout --
	* [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	* Restarting existing qemu2 VM for "ha-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:53:19.807174   12850 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:19.807340   12850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:19.807344   12850 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:19.807347   12850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:19.807500   12850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:19.808664   12850 out.go:298] Setting JSON to false
	I0327 13:53:19.827108   12850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6769,"bootTime":1711566030,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:53:19.827178   12850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:53:19.831728   12850 out.go:177] * [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:53:19.838649   12850 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:53:19.841528   12850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:53:19.838712   12850 notify.go:220] Checking for updates...
	I0327 13:53:19.844641   12850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:53:19.847648   12850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:53:19.849108   12850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:53:19.852637   12850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:53:19.855899   12850 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:19.855962   12850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:53:19.860475   12850 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:53:19.867605   12850 start.go:297] selected driver: qemu2
	I0327 13:53:19.867613   12850 start.go:901] validating driver "qemu2" against &{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:53:19.867675   12850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:53:19.869896   12850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:53:19.869944   12850 cni.go:84] Creating CNI manager for ""
	I0327 13:53:19.869949   12850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 13:53:19.870001   12850 start.go:340] cluster config:
	{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:53:19.874474   12850 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:53:19.881622   12850 out.go:177] * Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	I0327 13:53:19.885630   12850 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:53:19.885643   12850 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:53:19.885651   12850 cache.go:56] Caching tarball of preloaded images
	I0327 13:53:19.885710   12850 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:53:19.885715   12850 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:53:19.885776   12850 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/ha-746000/config.json ...
	I0327 13:53:19.886252   12850 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:53:19.886292   12850 start.go:364] duration metric: took 33.167µs to acquireMachinesLock for "ha-746000"
	I0327 13:53:19.886303   12850 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:53:19.886307   12850 fix.go:54] fixHost starting: 
	I0327 13:53:19.886433   12850 fix.go:112] recreateIfNeeded on ha-746000: state=Stopped err=<nil>
	W0327 13:53:19.886442   12850 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:53:19.889684   12850 out.go:177] * Restarting existing qemu2 VM for "ha-746000" ...
	I0327 13:53:19.896663   12850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c9:2c:83:96:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:53:19.898859   12850 main.go:141] libmachine: STDOUT: 
	I0327 13:53:19.898879   12850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:53:19.898912   12850 fix.go:56] duration metric: took 12.604125ms for fixHost
	I0327 13:53:19.898916   12850 start.go:83] releasing machines lock for "ha-746000", held for 12.619458ms
	W0327 13:53:19.898923   12850 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:53:19.898953   12850 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:53:19.898958   12850 start.go:728] Will try again in 5 seconds ...
	I0327 13:53:24.900046   12850 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:53:24.900344   12850 start.go:364] duration metric: took 224.958µs to acquireMachinesLock for "ha-746000"
	I0327 13:53:24.900468   12850 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:53:24.900492   12850 fix.go:54] fixHost starting: 
	I0327 13:53:24.901182   12850 fix.go:112] recreateIfNeeded on ha-746000: state=Stopped err=<nil>
	W0327 13:53:24.901209   12850 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:53:24.906551   12850 out.go:177] * Restarting existing qemu2 VM for "ha-746000" ...
	I0327 13:53:24.914757   12850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c9:2c:83:96:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:53:24.924637   12850 main.go:141] libmachine: STDOUT: 
	I0327 13:53:24.924704   12850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:53:24.924770   12850 fix.go:56] duration metric: took 24.283791ms for fixHost
	I0327 13:53:24.924786   12850 start.go:83] releasing machines lock for "ha-746000", held for 24.418625ms
	W0327 13:53:24.924975   12850 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:53:24.932511   12850 out.go:177] 
	W0327 13:53:24.936648   12850 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:53:24.936679   12850 out.go:239] * 
	* 
	W0327 13:53:24.939025   12850 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:53:24.946500   12850 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-746000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-746000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (34.226042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.90s)
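
Every restart attempt above fails at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so "Connection refused" surfaces before the guest ever boots. A minimal standalone Go sketch of that pre-flight check (paths taken from the log; this is not part of the test suite or of minikube) reproduces the root cause without invoking QEMU:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same socket path the qemu2 driver hands to socket_vmnet_client above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, this reports the same
		// "connection refused" (or "no such file or directory") failure.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}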

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 node delete m03 -v=7 --alsologtostderr: exit status 83 (45.018709ms)

-- stdout --
	* The control-plane node ha-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-746000"

-- /stdout --
** stderr ** 
	I0327 13:53:25.097591   12862 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:25.098007   12862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:25.098010   12862 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:25.098013   12862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:25.098198   12862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:25.098405   12862 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:25.098617   12862 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:25.103405   12862 out.go:177] * The control-plane node ha-746000 host is not running: state=Stopped
	I0327 13:53:25.107450   12862 out.go:177]   To start a cluster, run: "minikube start -p ha-746000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-746000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (32.557125ms)

-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:53:25.142956   12864 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:25.143103   12864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:25.143106   12864 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:25.143108   12864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:25.143242   12864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:25.143369   12864 out.go:298] Setting JSON to false
	I0327 13:53:25.143381   12864 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:25.143428   12864 notify.go:220] Checking for updates...
	I0327 13:53:25.143572   12864 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:25.143578   12864 status.go:255] checking status of ha-746000 ...
	I0327 13:53:25.143783   12864 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:53:25.143787   12864 status.go:343] host is not running, skipping remaining checks
	I0327 13:53:25.143790   12864 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (32.174208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
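
The post-mortem pattern repeated above (run the status command, then treat exit status 7 as "stopped host, may be ok" and exit status 83 as "control-plane host not running") can be reproduced outside the harness. A sketch, assuming it is run from a checkout where the report's out/minikube-darwin-arm64 binary exists:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-746000", "-n", "ha-746000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit code 7 here means the profile exists but its host is stopped,
		// not that the status command itself malfunctioned.
		fmt.Printf("host state %q, exit code %d\n", out, ee.ExitCode())
		return
	}
	fmt.Printf("host state %q, exit code 0\n", out)
}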

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-746000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":
null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\
",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.253333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
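
The escaped blob above is the entire `profile list --output json` payload, but the assertion reads only two fields of it. A decoding sketch (struct trimmed to those fields, with the sample input abbreviated from the log; json.Unmarshal ignores the rest of the Config object):

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields the "Degraded vs. Stopped" assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-746000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded"; the cluster never started, so it stays "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}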

TestMultiControlPlane/serial/StopCluster (3.4s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-746000 stop -v=7 --alsologtostderr: (3.300419417s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr: exit status 7 (66.411541ms)

-- stdout --
	ha-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:53:28.648906   12892 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:28.649055   12892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:28.649059   12892 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:28.649062   12892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:28.649218   12892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:28.649362   12892 out.go:298] Setting JSON to false
	I0327 13:53:28.649379   12892 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:28.649419   12892 notify.go:220] Checking for updates...
	I0327 13:53:28.649652   12892 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:28.649659   12892 status.go:255] checking status of ha-746000 ...
	I0327 13:53:28.649910   12892 status.go:330] ha-746000 host status = "Stopped" (err=<nil>)
	I0327 13:53:28.649915   12892 status.go:343] host is not running, skipping remaining checks
	I0327 13:53:28.649917   12892 status.go:257] ha-746000 status: &{Name:ha-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-746000 status -v=7 --alsologtostderr": ha-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (33.491583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.40s)
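
All three assertions above reduce to substring counts over the status text: with a single node stanza, none of the expected multi-node counts (two control planes, three stopped kubelets, two stopped apiservers) can hold. A simplified sketch of that style of check, using the status output captured above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	status := "ha-746000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // 1, test wants 2
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))     // 1, test wants 3
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // 1, test wants 2
}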

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-746000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-746000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.191178792s)

-- stdout --
	* [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	* Restarting existing qemu2 VM for "ha-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:53:28.714349   12896 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:28.714473   12896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:28.714476   12896 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:28.714479   12896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:28.714611   12896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:28.715558   12896 out.go:298] Setting JSON to false
	I0327 13:53:28.731562   12896 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6778,"bootTime":1711566030,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:53:28.731634   12896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:53:28.737011   12896 out.go:177] * [ha-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:53:28.745940   12896 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:53:28.749972   12896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:53:28.745990   12896 notify.go:220] Checking for updates...
	I0327 13:53:28.755880   12896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:53:28.758964   12896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:53:28.761848   12896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:53:28.764903   12896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:53:28.768259   12896 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:28.768537   12896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:53:28.772883   12896 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:53:28.779930   12896 start.go:297] selected driver: qemu2
	I0327 13:53:28.779936   12896 start.go:901] validating driver "qemu2" against &{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:53:28.780009   12896 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:53:28.782299   12896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:53:28.782347   12896 cni.go:84] Creating CNI manager for ""
	I0327 13:53:28.782353   12896 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 13:53:28.782408   12896 start.go:340] cluster config:
	{Name:ha-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:53:28.786709   12896 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:53:28.793919   12896 out.go:177] * Starting "ha-746000" primary control-plane node in "ha-746000" cluster
	I0327 13:53:28.797919   12896 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:53:28.797935   12896 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:53:28.797945   12896 cache.go:56] Caching tarball of preloaded images
	I0327 13:53:28.798002   12896 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:53:28.798007   12896 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:53:28.798074   12896 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/ha-746000/config.json ...
	I0327 13:53:28.798560   12896 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:53:28.798585   12896 start.go:364] duration metric: took 20.041µs to acquireMachinesLock for "ha-746000"
	I0327 13:53:28.798594   12896 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:53:28.798601   12896 fix.go:54] fixHost starting: 
	I0327 13:53:28.798713   12896 fix.go:112] recreateIfNeeded on ha-746000: state=Stopped err=<nil>
	W0327 13:53:28.798723   12896 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:53:28.802876   12896 out.go:177] * Restarting existing qemu2 VM for "ha-746000" ...
	I0327 13:53:28.810887   12896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c9:2c:83:96:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:53:28.812936   12896 main.go:141] libmachine: STDOUT: 
	I0327 13:53:28.812958   12896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:53:28.812990   12896 fix.go:56] duration metric: took 14.388875ms for fixHost
	I0327 13:53:28.812996   12896 start.go:83] releasing machines lock for "ha-746000", held for 14.406292ms
	W0327 13:53:28.813001   12896 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:53:28.813033   12896 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:53:28.813038   12896 start.go:728] Will try again in 5 seconds ...
	I0327 13:53:33.815183   12896 start.go:360] acquireMachinesLock for ha-746000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:53:33.815426   12896 start.go:364] duration metric: took 153.625µs to acquireMachinesLock for "ha-746000"
	I0327 13:53:33.815502   12896 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:53:33.815514   12896 fix.go:54] fixHost starting: 
	I0327 13:53:33.815898   12896 fix.go:112] recreateIfNeeded on ha-746000: state=Stopped err=<nil>
	W0327 13:53:33.815917   12896 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:53:33.824725   12896 out.go:177] * Restarting existing qemu2 VM for "ha-746000" ...
	I0327 13:53:33.829105   12896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c9:2c:83:96:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/ha-746000/disk.qcow2
	I0327 13:53:33.839362   12896 main.go:141] libmachine: STDOUT: 
	I0327 13:53:33.839428   12896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:53:33.839543   12896 fix.go:56] duration metric: took 24.026375ms for fixHost
	I0327 13:53:33.839567   12896 start.go:83] releasing machines lock for "ha-746000", held for 24.123666ms
	W0327 13:53:33.839763   12896 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:53:33.848696   12896 out.go:177] 
	W0327 13:53:33.852659   12896 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:53:33.852678   12896 out.go:239] * 
	* 
	W0327 13:53:33.854651   12896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:53:33.862666   12896 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-746000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (70.975042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
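
The start flow above makes exactly one retry after a fixed five-second wait ("Will try again in 5 seconds ...") and then aborts with GUEST_PROVISION. A condensed sketch of that shape (not minikube's actual start.go, just the pattern the log shows):

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the driver start that fails twice in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}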

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-746000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":
null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\
",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.918625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-746000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-746000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.566334ms)

-- stdout --
	* The control-plane node ha-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-746000"

-- /stdout --
** stderr ** 
	I0327 13:53:34.088410   12916 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:53:34.088555   12916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:34.088558   12916 out.go:304] Setting ErrFile to fd 2...
	I0327 13:53:34.088561   12916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:53:34.088694   12916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:53:34.088917   12916 mustload.go:65] Loading cluster: ha-746000
	I0327 13:53:34.089112   12916 config.go:182] Loaded profile config "ha-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:53:34.093017   12916 out.go:177] * The control-plane node ha-746000 host is not running: state=Stopped
	I0327 13:53:34.096958   12916 out.go:177]   To start a cluster, run: "minikube start -p ha-746000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-746000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.977083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-746000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDr
iverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true
,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInt
erval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-746000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":nul
l,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Contro
lPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\
"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-746000 -n ha-746000: exit status 7 (31.478167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

TestImageBuild/serial/Setup (9.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-037000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-037000 --driver=qemu2 : exit status 80 (9.870986125s)

-- stdout --
	* [image-037000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-037000" primary control-plane node in "image-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-037000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-037000 -n image-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-037000 -n image-037000: exit status 7 (69.529875ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)

TestJSONOutput/start/Command (9.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-160000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-160000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.820611917s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8c401bd4-69b8-4502-b686-477f8d40887c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-160000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35b17c73-9699-4d4d-875b-3c417808b387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18158"}}
	{"specversion":"1.0","id":"1dfb0bf8-6d75-466e-9b99-1a4a9c177261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig"}}
	{"specversion":"1.0","id":"087c98ef-3971-422f-972b-b64b253f6ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2df172e7-e2c2-41e7-a0c2-9c265e989f2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5dbeaa31-1809-463c-9b55-09ab25ef0d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube"}}
	{"specversion":"1.0","id":"8a7bab9b-cd28-48a4-bb79-cdc9c1215410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b5a38ff2-07d5-48d0-862b-a48f389e06c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"53000e3c-551e-4564-844b-4214ef74d035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6ed77bcd-68da-43b3-965d-98bb37f3ff1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-160000\" primary control-plane node in \"json-output-160000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a106ff0c-4f56-4798-9bc8-66248915d2c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"94cda351-f903-4bf0-91b2-d2bf93f8ed05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-160000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0f6cf62-1512-4a63-89f6-2e8c5e995a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"005c3cdd-7d6a-41ad-ad26-4244bdfba511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c55faa76-ab56-464a-9d76-c7be2ac9dd6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-160000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"99a51a91-9352-4fef-88f0-a1f34ee90ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"344ed3e3-c21e-4595-9776-8c97eb8b05b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-160000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.82s)
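
Note: the secondary error here ("invalid character 'O' looking for beginning of value") is the JSON test choking on the raw "OUTPUT: " / "ERROR: ..." passthrough that socket_vmnet_client prints between cloud events. With --output=json every stdout line must be a standalone JSON event, so a single non-JSON line breaks decoding. A small sketch of why (illustrative, not the test's code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
			`OUTPUT: `, // raw passthrough from socket_vmnet_client, not JSON
		}
		for _, l := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				fmt.Println(err) // invalid character 'O' looking for beginning of value
			}
		}
	}

The same decoder failure appears again below in TestJSONOutput/unpause/Command, where the human-readable "* ..." lines trigger "invalid character '*'".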

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-160000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-160000 --output=json --user=testUser: exit status 83 (79.405458ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b9f89395-e0c0-4952-88e0-9001f5e7ad8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-160000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"e97044f1-5693-4ca2-8691-3d64aab7492e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-160000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-160000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-160000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-160000 --output=json --user=testUser: exit status 83 (47.200541ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-160000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-160000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-160000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-160000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-366000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-366000 --driver=qemu2 : exit status 80 (9.832920291s)

                                                
                                                
-- stdout --
	* [first-366000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-366000" primary control-plane node in "first-366000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-366000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-366000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 13:54:07.954805 -0700 PDT m=+523.456329084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-367000 -n second-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-367000 -n second-367000: exit status 85 (80.840834ms)

                                                
                                                
-- stdout --
	* Profile "second-367000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-367000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-367000" host is not running, skipping log retrieval (state="* Profile \"second-367000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-367000\"")
helpers_test.go:175: Cleaning up "second-367000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-367000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 13:54:08.270888 -0700 PDT m=+523.772415626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-366000 -n first-366000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-366000 -n first-366000: exit status 7 (32.114833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-366000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-366000
--- FAIL: TestMinikubeProfile (10.29s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-907000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-907000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.544052708s)

                                                
                                                
-- stdout --
	* [mount-start-1-907000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-907000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-907000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-907000 -n mount-start-1-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-907000 -n mount-start-1-907000: exit status 7 (70.76625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-294000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-294000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.877700541s)

                                                
                                                
-- stdout --
	* [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:54:19.383355   13096 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:54:19.383514   13096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:54:19.383522   13096 out.go:304] Setting ErrFile to fd 2...
	I0327 13:54:19.383526   13096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:54:19.383791   13096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:54:19.385081   13096 out.go:298] Setting JSON to false
	I0327 13:54:19.401373   13096 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6829,"bootTime":1711566030,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:54:19.401445   13096 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:54:19.407547   13096 out.go:177] * [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:54:19.414436   13096 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:54:19.418467   13096 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:54:19.414448   13096 notify.go:220] Checking for updates...
	I0327 13:54:19.424448   13096 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:54:19.427494   13096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:54:19.430404   13096 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:54:19.433456   13096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:54:19.436686   13096 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:54:19.441409   13096 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:54:19.448451   13096 start.go:297] selected driver: qemu2
	I0327 13:54:19.448457   13096 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:54:19.448465   13096 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:54:19.450748   13096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:54:19.454358   13096 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:54:19.457578   13096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:54:19.457628   13096 cni.go:84] Creating CNI manager for ""
	I0327 13:54:19.457634   13096 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 13:54:19.457641   13096 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 13:54:19.457670   13096 start.go:340] cluster config:
	{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:54:19.462268   13096 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:54:19.469424   13096 out.go:177] * Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	I0327 13:54:19.473393   13096 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:54:19.473407   13096 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:54:19.473416   13096 cache.go:56] Caching tarball of preloaded images
	I0327 13:54:19.473477   13096 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:54:19.473483   13096 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:54:19.473724   13096 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/multinode-294000/config.json ...
	I0327 13:54:19.473736   13096 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/multinode-294000/config.json: {Name:mkfab9b147a61ac2c63d05c214fbff1ec92dfd9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:54:19.473971   13096 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:54:19.474003   13096 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "multinode-294000"
	I0327 13:54:19.474017   13096 start.go:93] Provisioning new machine with config: &{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:54:19.474047   13096 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:54:19.482465   13096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:54:19.500830   13096 start.go:159] libmachine.API.Create for "multinode-294000" (driver="qemu2")
	I0327 13:54:19.500858   13096 client.go:168] LocalClient.Create starting
	I0327 13:54:19.500925   13096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:54:19.500957   13096 main.go:141] libmachine: Decoding PEM data...
	I0327 13:54:19.500967   13096 main.go:141] libmachine: Parsing certificate...
	I0327 13:54:19.501020   13096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:54:19.501044   13096 main.go:141] libmachine: Decoding PEM data...
	I0327 13:54:19.501050   13096 main.go:141] libmachine: Parsing certificate...
	I0327 13:54:19.501455   13096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:54:19.640696   13096 main.go:141] libmachine: Creating SSH key...
	I0327 13:54:19.823444   13096 main.go:141] libmachine: Creating Disk image...
	I0327 13:54:19.823455   13096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:54:19.823645   13096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:19.836236   13096 main.go:141] libmachine: STDOUT: 
	I0327 13:54:19.836265   13096 main.go:141] libmachine: STDERR: 
	I0327 13:54:19.836336   13096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2 +20000M
	I0327 13:54:19.847061   13096 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:54:19.847086   13096 main.go:141] libmachine: STDERR: 
	I0327 13:54:19.847100   13096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:19.847105   13096 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:54:19.847134   13096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:69:19:e5:ce:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:19.848808   13096 main.go:141] libmachine: STDOUT: 
	I0327 13:54:19.848825   13096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:54:19.848845   13096 client.go:171] duration metric: took 347.985375ms to LocalClient.Create
	I0327 13:54:21.849341   13096 start.go:128] duration metric: took 2.375301458s to createHost
	I0327 13:54:21.849434   13096 start.go:83] releasing machines lock for "multinode-294000", held for 2.375446625s
	W0327 13:54:21.849486   13096 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:54:21.865490   13096 out.go:177] * Deleting "multinode-294000" in qemu2 ...
	W0327 13:54:21.893055   13096 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:54:21.893093   13096 start.go:728] Will try again in 5 seconds ...
	I0327 13:54:26.893416   13096 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:54:26.893878   13096 start.go:364] duration metric: took 344.625µs to acquireMachinesLock for "multinode-294000"
	I0327 13:54:26.893989   13096 start.go:93] Provisioning new machine with config: &{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:54:26.894380   13096 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:54:26.905960   13096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:54:26.955457   13096 start.go:159] libmachine.API.Create for "multinode-294000" (driver="qemu2")
	I0327 13:54:26.955501   13096 client.go:168] LocalClient.Create starting
	I0327 13:54:26.955610   13096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:54:26.955681   13096 main.go:141] libmachine: Decoding PEM data...
	I0327 13:54:26.955695   13096 main.go:141] libmachine: Parsing certificate...
	I0327 13:54:26.955753   13096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:54:26.955794   13096 main.go:141] libmachine: Decoding PEM data...
	I0327 13:54:26.955808   13096 main.go:141] libmachine: Parsing certificate...
	I0327 13:54:26.956304   13096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:54:27.107220   13096 main.go:141] libmachine: Creating SSH key...
	I0327 13:54:27.159793   13096 main.go:141] libmachine: Creating Disk image...
	I0327 13:54:27.159799   13096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:54:27.159966   13096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:27.172343   13096 main.go:141] libmachine: STDOUT: 
	I0327 13:54:27.172361   13096 main.go:141] libmachine: STDERR: 
	I0327 13:54:27.172410   13096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2 +20000M
	I0327 13:54:27.183117   13096 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:54:27.183131   13096 main.go:141] libmachine: STDERR: 
	I0327 13:54:27.183150   13096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:27.183160   13096 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:54:27.183198   13096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c8:bc:fc:e9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:54:27.184887   13096 main.go:141] libmachine: STDOUT: 
	I0327 13:54:27.184905   13096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:54:27.184917   13096 client.go:171] duration metric: took 229.412833ms to LocalClient.Create
	I0327 13:54:29.187116   13096 start.go:128] duration metric: took 2.292697042s to createHost
	I0327 13:54:29.187225   13096 start.go:83] releasing machines lock for "multinode-294000", held for 2.293350208s
	W0327 13:54:29.187604   13096 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:54:29.197303   13096 out.go:177] 
	W0327 13:54:29.204305   13096 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:54:29.204333   13096 out.go:239] * 
	* 
	W0327 13:54:29.206873   13096 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:54:29.215261   13096 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-294000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (69.650584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.95s)
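
Note: the verbose trace above shows disk preparation succeeding (qemu-img convert and resize, commands logged verbatim) before the launch via socket_vmnet_client fails. A sketch of those two shell-outs (file names shortened here; the real paths live under MINIKUBE_HOME, and qemu-img must be installed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
		steps := [][]string{
			{"convert", "-f", "raw", "-O", "qcow2", raw, qcow2}, // raw -> qcow2
			{"resize", qcow2, "+20000M"},                        // grow to the requested disk size
		}
		for _, args := range steps {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v: out=%q err=%v\n", args, out, err)
		}
	}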

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (106.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.518125ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-294000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- rollout status deployment/busybox: exit status 1 (59.949792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.374125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.64175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.408458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.869166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.669ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.071458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.101166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.137167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.763208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.797791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.675125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.061375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.192834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.826291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-294000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.030416ms)

** stderr ** 
	error: no server found for cluster "multinode-294000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.653375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.18s)
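Note: every kubectl call in this test fails with `error: no server found for cluster "multinode-294000"` because the qemu2 VM never came up, so the profile's kubeconfig has no reachable API server to resolve. The harness shells out to the minikube binary and treats any non-zero exit as a (possibly temporary) failure. A minimal sketch of that pattern, assuming only the binary path and profile name shown in this log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the "(dbg) Run:" lines above: minikube wraps kubectl for the profile.
    	cmd := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-294000",
    		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}")
    	out, err := cmd.CombinedOutput()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		// With no running API server, kubectl exits 1 and stderr carries the message.
    		fmt.Printf("non-zero exit %d: %s", exitErr.ExitCode(), out)
    	}
    }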

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-294000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.374ms)

** stderr ** 
	error: no server found for cluster "multinode-294000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.620958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-294000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-294000 -v 3 --alsologtostderr: exit status 83 (44.490291ms)

-- stdout --
	* The control-plane node multinode-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-294000"

-- /stdout --
** stderr ** 
	I0327 13:56:15.606315   13220 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:15.606456   13220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.606459   13220 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:15.606462   13220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.606584   13220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:15.606830   13220 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:15.607002   13220 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:15.611381   13220 out.go:177] * The control-plane node multinode-294000 host is not running: state=Stopped
	I0327 13:56:15.614180   13220 out.go:177]   To start a cluster, run: "minikube start -p multinode-294000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-294000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (31.690958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-294000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-294000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.452958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-294000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-294000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-294000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
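Note: the second error ("unexpected end of JSON input") is a direct consequence of the first: kubectl printed nothing, and decoding an empty byte slice with encoding/json always fails this way. A self-contained reproduction:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	var labels []map[string]string
    	// kubectl produced no output, so the test effectively decoded "".
    	err := json.Unmarshal([]byte(""), &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }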
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.383916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-294000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-294000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-294000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"Doc
kerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-294000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":
\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\
":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (31.988958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status --output json --alsologtostderr: exit status 7 (32.344417ms)

-- stdout --
	{"Name":"multinode-294000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0327 13:56:15.845586   13233 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:15.845704   13233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.845708   13233 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:15.845710   13233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.845833   13233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:15.845950   13233 out.go:298] Setting JSON to true
	I0327 13:56:15.845962   13233 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:15.846025   13233 notify.go:220] Checking for updates...
	I0327 13:56:15.846177   13233 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:15.846182   13233 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:15.846371   13233 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:15.846375   13233 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:15.846377   13233 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-294000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
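Note: with only one node, `minikube status --output json` emits a single JSON object (see the stdout above), but the test decodes into a slice, hence `json: cannot unmarshal object into Go value of type []cmd.Status`. Reproduced below with a stand-in Status type; the real cmd.Status lives in minikube's cmd package:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status is a stand-in for the cmd.Status type named in the error.
    type Status struct {
    	Name string
    	Host string
    }

    func main() {
    	raw := []byte(`{"Name":"multinode-294000","Host":"Stopped"}`)
    	var statuses []Status
    	err := json.Unmarshal(raw, &statuses)
    	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }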
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (31.878417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 node stop m03: exit status 85 (51.788042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-294000 node stop m03": exit status 85
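Note: exit status 85 (GUEST_NODE_RETRIEVE) follows from the earlier AddNode failure: the profile still contains only the primary node, so there is no node named m03 to stop. A minimal stand-in for that lookup, with the Nodes list shaped like the config dumps above (this is an illustration, not minikube's actual code):

    package main

    import "fmt"

    type node struct{ Name string }

    func findNode(nodes []node, name string) bool {
    	for _, n := range nodes {
    		if n.Name == name {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	nodes := []node{{Name: ""}} // only the primary; m02/m03 were never created
    	if !findNode(nodes, "m03") {
    		fmt.Println("Could not find node m03") // the GUEST_NODE_RETRIEVE message
    	}
    }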
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status: exit status 7 (32.413292ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr: exit status 7 (32.484333ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:15.994863   13241 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:15.995037   13241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.995041   13241 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:15.995043   13241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:15.995174   13241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:15.995302   13241 out.go:298] Setting JSON to false
	I0327 13:56:15.995313   13241 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:15.995359   13241 notify.go:220] Checking for updates...
	I0327 13:56:15.995503   13241 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:15.995509   13241 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:15.995710   13241 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:15.995715   13241 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:15.995717   13241 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr": multinode-294000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.608125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (45.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 node start m03 -v=7 --alsologtostderr: exit status 85 (52.99ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 13:56:16.060314   13245 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:16.060673   13245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:16.060676   13245 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:16.060679   13245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:16.060837   13245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:16.061055   13245 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:16.061248   13245 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:16.064797   13245 out.go:177] 
	W0327 13:56:16.072618   13245 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0327 13:56:16.072622   13245 out.go:239] * 
	* 
	W0327 13:56:16.074685   13245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:56:16.078650   13245 out.go:177] 

** /stderr **
multinode_test.go:284: I0327 13:56:16.060314   13245 out.go:291] Setting OutFile to fd 1 ...
I0327 13:56:16.060673   13245 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:56:16.060676   13245 out.go:304] Setting ErrFile to fd 2...
I0327 13:56:16.060679   13245 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 13:56:16.060837   13245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
I0327 13:56:16.061055   13245 mustload.go:65] Loading cluster: multinode-294000
I0327 13:56:16.061248   13245 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 13:56:16.064797   13245 out.go:177] 
W0327 13:56:16.072618   13245 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0327 13:56:16.072622   13245 out.go:239] * 
* 
W0327 13:56:16.074685   13245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 13:56:16.078650   13245 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-294000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (32.532209ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:16.113485   13247 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:16.113630   13247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:16.113633   13247 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:16.113635   13247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:16.113759   13247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:16.113882   13247 out.go:298] Setting JSON to false
	I0327 13:56:16.113895   13247 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:16.113944   13247 notify.go:220] Checking for updates...
	I0327 13:56:16.114107   13247 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:16.114113   13247 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:16.114321   13247 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:16.114325   13247 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:16.114327   13247 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (76.023666ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:17.594896   13249 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:17.595049   13249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:17.595054   13249 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:17.595057   13249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:17.595221   13249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:17.595362   13249 out.go:298] Setting JSON to false
	I0327 13:56:17.595378   13249 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:17.595418   13249 notify.go:220] Checking for updates...
	I0327 13:56:17.595631   13249 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:17.595639   13249 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:17.595888   13249 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:17.595893   13249 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:17.595895   13249 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (76.248042ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:18.575766   13251 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:18.575947   13251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:18.575951   13251 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:18.575954   13251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:18.576114   13251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:18.576273   13251 out.go:298] Setting JSON to false
	I0327 13:56:18.576291   13251 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:18.576330   13251 notify.go:220] Checking for updates...
	I0327 13:56:18.576538   13251 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:18.576546   13251 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:18.576810   13251 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:18.576815   13251 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:18.576817   13251 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (77.308542ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:20.466606   13253 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:20.466787   13253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:20.466791   13253 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:20.466794   13253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:20.466947   13253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:20.467128   13253 out.go:298] Setting JSON to false
	I0327 13:56:20.467141   13253 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:20.467177   13253 notify.go:220] Checking for updates...
	I0327 13:56:20.467378   13253 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:20.467385   13253 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:20.467651   13253 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:20.467655   13253 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:20.467660   13253 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (80.308667ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:24.797748   13255 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:24.797906   13255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:24.797911   13255 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:24.797914   13255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:24.798081   13255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:24.798233   13255 out.go:298] Setting JSON to false
	I0327 13:56:24.798248   13255 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:24.798290   13255 notify.go:220] Checking for updates...
	I0327 13:56:24.798508   13255 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:24.798515   13255 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:24.798781   13255 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:24.798786   13255 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:24.798788   13255 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (74.808125ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:32.444641   13260 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:32.444847   13260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:32.444851   13260 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:32.444854   13260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:32.445018   13260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:32.445178   13260 out.go:298] Setting JSON to false
	I0327 13:56:32.445191   13260 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:32.445232   13260 notify.go:220] Checking for updates...
	I0327 13:56:32.445446   13260 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:32.445454   13260 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:32.445725   13260 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:32.445731   13260 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:32.445734   13260 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (77.583125ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:39.962432   13269 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:39.962583   13269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:39.962587   13269 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:39.962590   13269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:39.962750   13269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:39.962906   13269 out.go:298] Setting JSON to false
	I0327 13:56:39.962920   13269 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:39.962961   13269 notify.go:220] Checking for updates...
	I0327 13:56:39.963157   13269 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:39.963165   13269 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:39.963431   13269 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:39.963436   13269 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:39.963439   13269 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (74.933875ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:56:49.560330   13275 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:56:49.560519   13275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:49.560523   13275 out.go:304] Setting ErrFile to fd 2...
	I0327 13:56:49.560526   13275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:56:49.560696   13275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:56:49.560886   13275 out.go:298] Setting JSON to false
	I0327 13:56:49.560902   13275 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:56:49.560931   13275 notify.go:220] Checking for updates...
	I0327 13:56:49.561164   13275 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:56:49.561171   13275 status.go:255] checking status of multinode-294000 ...
	I0327 13:56:49.561484   13275 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:56:49.561489   13275 status.go:343] host is not running, skipping remaining checks
	I0327 13:56:49.561492   13275 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr: exit status 7 (76.254542ms)

-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 13:57:01.953509   13282 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:01.953685   13282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:01.953689   13282 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:01.953692   13282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:01.953873   13282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:01.954037   13282 out.go:298] Setting JSON to false
	I0327 13:57:01.954051   13282 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:57:01.954089   13282 notify.go:220] Checking for updates...
	I0327 13:57:01.954317   13282 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:01.954328   13282 status.go:255] checking status of multinode-294000 ...
	I0327 13:57:01.954604   13282 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:57:01.954609   13282 status.go:343] host is not running, skipping remaining checks
	I0327 13:57:01.954612   13282 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-294000 status -v=7 --alsologtostderr" : exit status 7
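Note: the 45.96s duration is spent almost entirely in the poll loop above: multinode_test.go:290 re-runs `minikube status` at growing intervals (timestamps 13:56:16 through 13:57:01) before declaring failure. A minimal sketch of such a backoff poll, assuming a check function that reports whether the host is running (this is not the test's actual helper):

    package main

    import (
    	"fmt"
    	"time"
    )

    // pollWithBackoff retries check until it succeeds or attempts run out,
    // roughly doubling the wait between tries, like the timestamps above.
    func pollWithBackoff(check func() bool, attempts int) bool {
    	delay := time.Second
    	for i := 0; i < attempts; i++ {
    		if check() {
    			return true
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return false
    }

    func main() {
    	ok := pollWithBackoff(func() bool { return false }, 5)
    	fmt.Println("host running:", ok) // false: the stopped VM never comes back
    }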
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (34.702584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.96s)

TestMultiNode/serial/RestartKeepsNodes (7.45s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-294000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-294000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-294000: (2.076634458s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-294000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-294000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229361458s)

-- stdout --
	* [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	* Restarting existing qemu2 VM for "multinode-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 13:57:04.164205   13300 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:04.164359   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:04.164363   13300 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:04.164366   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:04.164523   13300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:04.165639   13300 out.go:298] Setting JSON to false
	I0327 13:57:04.184188   13300 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6994,"bootTime":1711566030,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:57:04.184247   13300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:57:04.188673   13300 out.go:177] * [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:57:04.199585   13300 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:57:04.195560   13300 notify.go:220] Checking for updates...
	I0327 13:57:04.204119   13300 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:57:04.207591   13300 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:57:04.210630   13300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:57:04.217607   13300 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:57:04.220631   13300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:57:04.223946   13300 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:04.224004   13300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:57:04.228503   13300 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:57:04.235645   13300 start.go:297] selected driver: qemu2
	I0327 13:57:04.235651   13300 start.go:901] validating driver "qemu2" against &{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multin
ode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:57:04.235712   13300 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:57:04.237950   13300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:57:04.238011   13300 cni.go:84] Creating CNI manager for ""
	I0327 13:57:04.238017   13300 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 13:57:04.238075   13300 start.go:340] cluster config:
	{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:57:04.242757   13300 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:04.249623   13300 out.go:177] * Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	I0327 13:57:04.252565   13300 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:57:04.252578   13300 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:57:04.252588   13300 cache.go:56] Caching tarball of preloaded images
	I0327 13:57:04.252642   13300 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:57:04.252648   13300 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:57:04.252725   13300 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/multinode-294000/config.json ...
	I0327 13:57:04.253186   13300 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:04.253223   13300 start.go:364] duration metric: took 25.834µs to acquireMachinesLock for "multinode-294000"
	I0327 13:57:04.253233   13300 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:57:04.253239   13300 fix.go:54] fixHost starting: 
	I0327 13:57:04.253365   13300 fix.go:112] recreateIfNeeded on multinode-294000: state=Stopped err=<nil>
	W0327 13:57:04.253374   13300 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:57:04.261389   13300 out.go:177] * Restarting existing qemu2 VM for "multinode-294000" ...
	I0327 13:57:04.265627   13300 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c8:bc:fc:e9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:57:04.267729   13300 main.go:141] libmachine: STDOUT: 
	I0327 13:57:04.267752   13300 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:04.267780   13300 fix.go:56] duration metric: took 14.542167ms for fixHost
	I0327 13:57:04.267785   13300 start.go:83] releasing machines lock for "multinode-294000", held for 14.556917ms
	W0327 13:57:04.267792   13300 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:57:04.267820   13300 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:04.267825   13300 start.go:728] Will try again in 5 seconds ...
	I0327 13:57:09.269978   13300 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:09.270333   13300 start.go:364] duration metric: took 277.167µs to acquireMachinesLock for "multinode-294000"
	I0327 13:57:09.270483   13300 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:57:09.270503   13300 fix.go:54] fixHost starting: 
	I0327 13:57:09.271171   13300 fix.go:112] recreateIfNeeded on multinode-294000: state=Stopped err=<nil>
	W0327 13:57:09.271199   13300 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:57:09.276355   13300 out.go:177] * Restarting existing qemu2 VM for "multinode-294000" ...
	I0327 13:57:09.284745   13300 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c8:bc:fc:e9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:57:09.294260   13300 main.go:141] libmachine: STDOUT: 
	I0327 13:57:09.294337   13300 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:09.294422   13300 fix.go:56] duration metric: took 23.915ms for fixHost
	I0327 13:57:09.294440   13300 start.go:83] releasing machines lock for "multinode-294000", held for 24.062333ms
	W0327 13:57:09.294675   13300 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:09.301543   13300 out.go:177] 
	W0327 13:57:09.305570   13300 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:57:09.305696   13300 out.go:239] * 
	* 
	W0327 13:57:09.308401   13300 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:57:09.314415   13300 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-294000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-294000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (34.545166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.45s)
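Every restart attempt in the run above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so qemu-system-aarch64 never receives its network file descriptor and the VM stays stopped. As a quick host-side check, one can dial the unix socket directly; the Go sketch below is a diagnostic illustration only (not minikube code), with the socket path taken from the log.

	// probe_socket_vmnet.go — minimal diagnostic sketch (not minikube code).
	// Dials the unix socket that socket_vmnet_client uses; "connection refused"
	// here reproduces the driver-start failure captured in the log above.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log; adjust if installed elsewhere
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}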

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 node delete m03: exit status 83 (46.875458ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-294000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-294000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr: exit status 7 (32.62775ms)

                                                
                                                
-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:57:09.513848   13314 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:09.514019   13314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:09.514023   13314 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:09.514025   13314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:09.514156   13314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:09.514276   13314 out.go:298] Setting JSON to false
	I0327 13:57:09.514290   13314 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:57:09.514355   13314 notify.go:220] Checking for updates...
	I0327 13:57:09.514498   13314 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:09.514504   13314 status.go:255] checking status of multinode-294000 ...
	I0327 13:57:09.514707   13314 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:57:09.514711   13314 status.go:343] host is not running, skipping remaining checks
	I0327 13:57:09.514714   13314 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.288542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
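Note the distinct exit codes in this block: `node delete` returns 83 (the CLI refuses to act because the control-plane host is Stopped, a knock-on effect of the failed restart above), while `status` returns 7, which the harness itself flags as "may be ok" for a stopped host. The harness reads these codes from the wrapped process; a minimal sketch of that pattern in Go (illustrative, not the helpers_test.go implementation):

	// exitcode.go — sketch of how a harness reads a subcommand's exit status.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-294000", "node", "delete", "m03")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit status %d\n", ee.ExitCode()) // 83 in the run above
		}
	}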

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-294000 stop: (3.283345917s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status: exit status 7 (69.880125ms)

                                                
                                                
-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr: exit status 7 (34.12525ms)

                                                
                                                
-- stdout --
	multinode-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:57:12.934133   13338 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:12.934264   13338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:12.934267   13338 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:12.934269   13338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:12.934386   13338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:12.934517   13338 out.go:298] Setting JSON to false
	I0327 13:57:12.934528   13338 mustload.go:65] Loading cluster: multinode-294000
	I0327 13:57:12.934593   13338 notify.go:220] Checking for updates...
	I0327 13:57:12.934741   13338 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:12.934747   13338 status.go:255] checking status of multinode-294000 ...
	I0327 13:57:12.934956   13338 status.go:330] multinode-294000 host status = "Stopped" (err=<nil>)
	I0327 13:57:12.934960   13338 status.go:343] host is not running, skipping remaining checks
	I0327 13:57:12.934962   13338 status.go:257] multinode-294000 status: &{Name:multinode-294000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr": multinode-294000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-294000 status --alsologtostderr": multinode-294000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.219708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.42s)
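StopMultiNode fails on counting, not on stopping: `minikube stop` itself succeeded, but the status output lists only the single control-plane node, while the test was written for a multi-node cluster that was never created. Assuming the check simply counts "host: Stopped" (and "kubelet: Stopped") lines against the expected node count, a sketch of that comparison:

	// stopcount.go — sketch of the stopped-host check, assuming the test
	// counts "host: Stopped" lines against the expected node count.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		status := "multinode-294000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		want := 2 // a two-node cluster was requested
		if got := strings.Count(status, "host: Stopped"); got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}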

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-294000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-294000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.191726583s)

                                                
                                                
-- stdout --
	* [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	* Restarting existing qemu2 VM for "multinode-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:57:12.997965   13342 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:12.998086   13342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:12.998090   13342 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:12.998092   13342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:12.998223   13342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:12.999250   13342 out.go:298] Setting JSON to false
	I0327 13:57:13.015381   13342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7003,"bootTime":1711566030,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:57:13.015441   13342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:57:13.019653   13342 out.go:177] * [multinode-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:57:13.026709   13342 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:57:13.030589   13342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:57:13.026757   13342 notify.go:220] Checking for updates...
	I0327 13:57:13.036629   13342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:57:13.039508   13342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:57:13.046657   13342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:57:13.048206   13342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:57:13.051884   13342 config.go:182] Loaded profile config "multinode-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:13.052137   13342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:57:13.056572   13342 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:57:13.062621   13342 start.go:297] selected driver: qemu2
	I0327 13:57:13.062626   13342 start.go:901] validating driver "qemu2" against &{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:57:13.062701   13342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:57:13.064900   13342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:57:13.064951   13342 cni.go:84] Creating CNI manager for ""
	I0327 13:57:13.064957   13342 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 13:57:13.064995   13342 start.go:340] cluster config:
	{Name:multinode-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:57:13.069246   13342 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:13.076607   13342 out.go:177] * Starting "multinode-294000" primary control-plane node in "multinode-294000" cluster
	I0327 13:57:13.080632   13342 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:57:13.080646   13342 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:57:13.080656   13342 cache.go:56] Caching tarball of preloaded images
	I0327 13:57:13.080705   13342 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 13:57:13.080710   13342 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:57:13.080781   13342 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/multinode-294000/config.json ...
	I0327 13:57:13.081258   13342 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:13.081285   13342 start.go:364] duration metric: took 20.458µs to acquireMachinesLock for "multinode-294000"
	I0327 13:57:13.081294   13342 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:57:13.081300   13342 fix.go:54] fixHost starting: 
	I0327 13:57:13.081416   13342 fix.go:112] recreateIfNeeded on multinode-294000: state=Stopped err=<nil>
	W0327 13:57:13.081426   13342 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:57:13.089619   13342 out.go:177] * Restarting existing qemu2 VM for "multinode-294000" ...
	I0327 13:57:13.093586   13342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c8:bc:fc:e9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:57:13.095572   13342 main.go:141] libmachine: STDOUT: 
	I0327 13:57:13.095596   13342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:13.095627   13342 fix.go:56] duration metric: took 14.326708ms for fixHost
	I0327 13:57:13.095633   13342 start.go:83] releasing machines lock for "multinode-294000", held for 14.344208ms
	W0327 13:57:13.095638   13342 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:57:13.095668   13342 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:13.095673   13342 start.go:728] Will try again in 5 seconds ...
	I0327 13:57:18.097859   13342 start.go:360] acquireMachinesLock for multinode-294000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:18.098220   13342 start.go:364] duration metric: took 258.584µs to acquireMachinesLock for "multinode-294000"
	I0327 13:57:18.098372   13342 start.go:96] Skipping create...Using existing machine configuration
	I0327 13:57:18.098395   13342 fix.go:54] fixHost starting: 
	I0327 13:57:18.099155   13342 fix.go:112] recreateIfNeeded on multinode-294000: state=Stopped err=<nil>
	W0327 13:57:18.099183   13342 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 13:57:18.108460   13342 out.go:177] * Restarting existing qemu2 VM for "multinode-294000" ...
	I0327 13:57:18.112793   13342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c8:bc:fc:e9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/multinode-294000/disk.qcow2
	I0327 13:57:18.122770   13342 main.go:141] libmachine: STDOUT: 
	I0327 13:57:18.122826   13342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:18.122921   13342 fix.go:56] duration metric: took 24.526667ms for fixHost
	I0327 13:57:18.122940   13342 start.go:83] releasing machines lock for "multinode-294000", held for 24.695375ms
	W0327 13:57:18.123153   13342 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:18.130593   13342 out.go:177] 
	W0327 13:57:18.134727   13342 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:57:18.134776   13342 out.go:239] * 
	* 
	W0327 13:57:18.138436   13342 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:57:18.145594   13342 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-294000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (70.577459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
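RestartMultiNode replays the same two-attempt sequence seen earlier: fixHost fails on the refused socket, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits 80 with GUEST_PROVISION. A compressed sketch of that retry shape (illustrative only, not the start.go source):

	// retry.go — shape of the single start retry visible in the log (illustrative).
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// startHost stands in for the driver start; in this run it is always refused.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}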

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-294000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-294000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-294000-m01 --driver=qemu2 : exit status 80 (9.887435084s)

                                                
                                                
-- stdout --
	* [multinode-294000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-294000-m01" primary control-plane node in "multinode-294000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-294000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-294000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-294000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-294000-m02 --driver=qemu2 : exit status 80 (10.256747s)

                                                
                                                
-- stdout --
	* [multinode-294000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-294000-m02" primary control-plane node in "multinode-294000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-294000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-294000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-294000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-294000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-294000: exit status 83 (84.08775ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-294000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-294000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-294000 -n multinode-294000: exit status 7 (32.325625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.41s)

                                                
                                    
TestPreload (10.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-563000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-563000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.011494125s)

                                                
                                                
-- stdout --
	* [test-preload-563000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-563000" primary control-plane node in "test-preload-563000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-563000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 13:57:38.818627   13406 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:57:38.818755   13406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:38.818758   13406 out.go:304] Setting ErrFile to fd 2...
	I0327 13:57:38.818760   13406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:57:38.818884   13406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:57:38.820051   13406 out.go:298] Setting JSON to false
	I0327 13:57:38.836199   13406 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7028,"bootTime":1711566030,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:57:38.836256   13406 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:57:38.841464   13406 out.go:177] * [test-preload-563000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:57:38.855215   13406 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:57:38.849486   13406 notify.go:220] Checking for updates...
	I0327 13:57:38.863343   13406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:57:38.866422   13406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:57:38.874341   13406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:57:38.877346   13406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:57:38.880405   13406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:57:38.883715   13406 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:57:38.883771   13406 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:57:38.888330   13406 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 13:57:38.895402   13406 start.go:297] selected driver: qemu2
	I0327 13:57:38.895410   13406 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:57:38.895417   13406 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:57:38.897860   13406 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:57:38.901364   13406 out.go:177] * Automatically selected the socket_vmnet network
	I0327 13:57:38.904384   13406 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 13:57:38.904430   13406 cni.go:84] Creating CNI manager for ""
	I0327 13:57:38.904437   13406 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:57:38.904447   13406 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:57:38.904481   13406 start.go:340] cluster config:
	{Name:test-preload-563000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:57:38.909386   13406 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.915353   13406 out.go:177] * Starting "test-preload-563000" primary control-plane node in "test-preload-563000" cluster
	I0327 13:57:38.919295   13406 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0327 13:57:38.919370   13406 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/test-preload-563000/config.json ...
	I0327 13:57:38.919386   13406 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/test-preload-563000/config.json: {Name:mk4bed7e068a6ec33ed5241d927134f1c4f502a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:57:38.919420   13406 cache.go:107] acquiring lock: {Name:mk95ee8b8889c41cfcc444f65a848f051b38686b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919459   13406 cache.go:107] acquiring lock: {Name:mkf105a872ffd6f00edd57af48b05b2d042fa5bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919463   13406 cache.go:107] acquiring lock: {Name:mkc949fd4f1858deff18e4a76b820925495c503d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919608   13406 cache.go:107] acquiring lock: {Name:mk9df7c095face451c3a5a8c1598a2780ea24dc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919614   13406 cache.go:107] acquiring lock: {Name:mk4a6210d76201023a510414fbe123dc161de5c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919699   13406 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 13:57:38.919704   13406 cache.go:107] acquiring lock: {Name:mk59402c5c75e4c1593b8164fe6306f170bdc30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919717   13406 start.go:360] acquireMachinesLock for test-preload-563000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:38.919731   13406 cache.go:107] acquiring lock: {Name:mkdf73143fcab56220933ec85fb9bf60b8307130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919790   13406 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 13:57:38.919794   13406 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 13:57:38.919787   13406 cache.go:107] acquiring lock: {Name:mk8da1c6b6e76bb4da4788c16e98b2ef6bf1b097 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:57:38.919831   13406 start.go:364] duration metric: took 102.167µs to acquireMachinesLock for "test-preload-563000"
	I0327 13:57:38.919870   13406 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 13:57:38.919966   13406 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 13:57:38.919906   13406 start.go:93] Provisioning new machine with config: &{Name:test-preload-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:57:38.919976   13406 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:57:38.924390   13406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:57:38.920045   13406 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 13:57:38.920062   13406 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 13:57:38.920095   13406 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 13:57:38.933720   13406 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 13:57:38.934500   13406 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 13:57:38.934618   13406 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 13:57:38.934641   13406 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 13:57:38.934670   13406 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 13:57:38.943064   13406 start.go:159] libmachine.API.Create for "test-preload-563000" (driver="qemu2")
	I0327 13:57:38.943101   13406 client.go:168] LocalClient.Create starting
	I0327 13:57:38.943171   13406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:57:38.943201   13406 main.go:141] libmachine: Decoding PEM data...
	I0327 13:57:38.943211   13406 main.go:141] libmachine: Parsing certificate...
	I0327 13:57:38.943255   13406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:57:38.943278   13406 main.go:141] libmachine: Decoding PEM data...
	I0327 13:57:38.943284   13406 main.go:141] libmachine: Parsing certificate...
	I0327 13:57:38.943658   13406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:57:38.945552   13406 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 13:57:38.945611   13406 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 13:57:38.947583   13406 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 13:57:39.126035   13406 main.go:141] libmachine: Creating SSH key...
	I0327 13:57:39.291677   13406 main.go:141] libmachine: Creating Disk image...
	I0327 13:57:39.291695   13406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:57:39.291870   13406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:39.304032   13406 main.go:141] libmachine: STDOUT: 
	I0327 13:57:39.304055   13406 main.go:141] libmachine: STDERR: 
	I0327 13:57:39.304110   13406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2 +20000M
	I0327 13:57:39.315210   13406 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:57:39.315228   13406 main.go:141] libmachine: STDERR: 
	I0327 13:57:39.315244   13406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:39.315248   13406 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:57:39.315272   13406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8c:e2:2d:b6:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:39.317155   13406 main.go:141] libmachine: STDOUT: 
	I0327 13:57:39.317171   13406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:39.317187   13406 client.go:171] duration metric: took 374.0865ms to LocalClient.Create
	I0327 13:57:40.583531   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0327 13:57:40.597253   13406 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 13:57:40.597344   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 13:57:40.607435   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 13:57:40.610456   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0327 13:57:40.761634   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0327 13:57:40.761718   13406 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.84200575s
	I0327 13:57:40.761756   13406 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0327 13:57:40.847315   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 13:57:41.017242   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0327 13:57:41.019483   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0327 13:57:41.317476   13406 start.go:128] duration metric: took 2.397503333s to createHost
	I0327 13:57:41.317526   13406 start.go:83] releasing machines lock for "test-preload-563000", held for 2.397710417s
	W0327 13:57:41.317603   13406 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:41.334863   13406 out.go:177] * Deleting "test-preload-563000" in qemu2 ...
	W0327 13:57:41.365826   13406 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:41.365857   13406 start.go:728] Will try again in 5 seconds ...
	W0327 13:57:41.525934   13406 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 13:57:41.526031   13406 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 13:57:42.173497   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0327 13:57:42.173574   13406 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.254027708s
	I0327 13:57:42.173606   13406 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0327 13:57:43.348229   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 13:57:43.348298   13406 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.428932916s
	I0327 13:57:43.348327   13406 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 13:57:43.843116   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0327 13:57:43.843176   13406 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.923544125s
	I0327 13:57:43.843202   13406 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0327 13:57:44.822993   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0327 13:57:44.823039   13406 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.903697333s
	I0327 13:57:44.823070   13406 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0327 13:57:45.128636   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0327 13:57:45.128687   13406 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.209316167s
	I0327 13:57:45.128740   13406 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0327 13:57:45.703933   13406 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0327 13:57:45.704000   13406 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.78437825s
	I0327 13:57:45.704029   13406 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0327 13:57:46.366004   13406 start.go:360] acquireMachinesLock for test-preload-563000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 13:57:46.366399   13406 start.go:364] duration metric: took 316.292µs to acquireMachinesLock for "test-preload-563000"
	I0327 13:57:46.366529   13406 start.go:93] Provisioning new machine with config: &{Name:test-preload-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 13:57:46.366771   13406 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 13:57:46.376833   13406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 13:57:46.425187   13406 start.go:159] libmachine.API.Create for "test-preload-563000" (driver="qemu2")
	I0327 13:57:46.425252   13406 client.go:168] LocalClient.Create starting
	I0327 13:57:46.425363   13406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 13:57:46.425421   13406 main.go:141] libmachine: Decoding PEM data...
	I0327 13:57:46.425440   13406 main.go:141] libmachine: Parsing certificate...
	I0327 13:57:46.425501   13406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 13:57:46.425542   13406 main.go:141] libmachine: Decoding PEM data...
	I0327 13:57:46.425553   13406 main.go:141] libmachine: Parsing certificate...
	I0327 13:57:46.426043   13406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 13:57:46.588602   13406 main.go:141] libmachine: Creating SSH key...
	I0327 13:57:46.728670   13406 main.go:141] libmachine: Creating Disk image...
	I0327 13:57:46.728676   13406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 13:57:46.728853   13406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:46.741564   13406 main.go:141] libmachine: STDOUT: 
	I0327 13:57:46.741584   13406 main.go:141] libmachine: STDERR: 
	I0327 13:57:46.741655   13406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2 +20000M
	I0327 13:57:46.752661   13406 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 13:57:46.752679   13406 main.go:141] libmachine: STDERR: 
	I0327 13:57:46.752691   13406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:46.752696   13406 main.go:141] libmachine: Starting QEMU VM...
	I0327 13:57:46.752736   13406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ec:9f:4f:2d:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/test-preload-563000/disk.qcow2
	I0327 13:57:46.754644   13406 main.go:141] libmachine: STDOUT: 
	I0327 13:57:46.754660   13406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 13:57:46.754673   13406 client.go:171] duration metric: took 329.418125ms to LocalClient.Create
	I0327 13:57:48.754873   13406 start.go:128] duration metric: took 2.388068042s to createHost
	I0327 13:57:48.754932   13406 start.go:83] releasing machines lock for "test-preload-563000", held for 2.388540458s
	W0327 13:57:48.755417   13406 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 13:57:48.763109   13406 out.go:177] 
	W0327 13:57:48.770197   13406 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 13:57:48.770246   13406 out.go:239] * 
	* 
	W0327 13:57:48.772990   13406 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:57:48.784090   13406 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-563000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-27 13:57:48.802552 -0700 PDT m=+744.306855918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-563000 -n test-preload-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-563000 -n test-preload-563000: exit status 7 (69.350625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-563000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-563000
--- FAIL: TestPreload (10.19s)
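
Note: every start attempt above dies at the same step: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the connect to the daemon's Unix socket at /var/run/socket_vmnet is refused. The commands below are an illustrative spot-check, not part of the test run; the socket path is taken from the log, and the probe assumes the BSD netcat shipped with macOS:

	# Does the daemon's socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	sudo nc -U /var/run/socket_vmnet < /dev/null && echo "socket accepts connections"

A socket file that exists but refuses connections usually means the socket_vmnet daemon has exited; a missing file means it was never started.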

TestScheduledStopUnix (10.57s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-195000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-195000 --memory=2048 --driver=qemu2 : exit status 80 (10.393489667s)

-- stdout --
	* [scheduled-stop-195000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-195000" primary control-plane node in "scheduled-stop-195000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-195000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-195000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-195000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-195000" primary control-plane node in "scheduled-stop-195000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-195000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-195000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-27 13:57:59.371324 -0700 PDT m=+754.875760751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-195000 -n scheduled-stop-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-195000 -n scheduled-stop-195000: exit status 7 (70.44275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-195000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-195000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-195000
--- FAIL: TestScheduledStopUnix (10.57s)

TestSkaffold (16.96s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2852518948 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2852518948 version: (1.05221825s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-952000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-952000 --memory=2600 --driver=qemu2 : exit status 80 (10.004626667s)

-- stdout --
	* [skaffold-952000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-952000" primary control-plane node in "skaffold-952000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-952000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-952000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-952000" primary control-plane node in "skaffold-952000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-952000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-27 13:58:16.337896 -0700 PDT m=+771.842546376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-952000 -n skaffold-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-952000 -n skaffold-952000: exit status 7 (64.11325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-952000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-952000
--- FAIL: TestSkaffold (16.96s)
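
The identical refusal sinks TestPreload, TestScheduledStopUnix, and TestSkaffold, so the likely remedy is host-side: bring the socket_vmnet daemon back up before re-running the suite. A hedged sketch only; the Homebrew service name and the default --vmnet-gateway value come from the socket_vmnet documentation and may differ on this Jenkins agent:

	# If socket_vmnet was installed via Homebrew:
	sudo brew services start socket_vmnet
	# Or run the daemon directly from the layout the log shows (gateway address is the documented default, adjust as needed):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Either way, re-running these tests is only meaningful once /var/run/socket_vmnet accepts connections.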

TestRunningBinaryUpgrade (635.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1522008047 start -p running-upgrade-823000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1522008047 start -p running-upgrade-823000 --memory=2200 --vm-driver=qemu2 : (1m20.518856291s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-823000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-823000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.104844917s)

-- stdout --
	* [running-upgrade-823000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-823000" primary control-plane node in "running-upgrade-823000" cluster
	* Updating the running qemu2 "running-upgrade-823000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0327 14:00:22.561785   13860 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:00:22.561931   13860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:00:22.561934   13860 out.go:304] Setting ErrFile to fd 2...
	I0327 14:00:22.561937   13860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:00:22.562067   13860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:00:22.563034   13860 out.go:298] Setting JSON to false
	I0327 14:00:22.580223   13860 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7192,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:00:22.580296   13860 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:00:22.585651   13860 out.go:177] * [running-upgrade-823000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:00:22.591637   13860 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:00:22.591681   13860 notify.go:220] Checking for updates...
	I0327 14:00:22.595717   13860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:00:22.598626   13860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:00:22.601578   13860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:00:22.604652   13860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:00:22.607517   13860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:00:22.610844   13860 config.go:182] Loaded profile config "running-upgrade-823000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:00:22.614720   13860 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 14:00:22.617494   13860 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:00:22.620585   13860 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:00:22.626532   13860 start.go:297] selected driver: qemu2
	I0327 14:00:22.626536   13860 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:00:22.626591   13860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:00:22.629182   13860 cni.go:84] Creating CNI manager for ""
	I0327 14:00:22.629202   13860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:00:22.629225   13860 start.go:340] cluster config:
	{Name:running-upgrade-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:00:22.629277   13860 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:00:22.637581   13860 out.go:177] * Starting "running-upgrade-823000" primary control-plane node in "running-upgrade-823000" cluster
	I0327 14:00:22.640598   13860 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:00:22.640614   13860 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 14:00:22.640621   13860 cache.go:56] Caching tarball of preloaded images
	I0327 14:00:22.640673   13860 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:00:22.640678   13860 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 14:00:22.640731   13860 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/config.json ...
	I0327 14:00:22.641092   13860 start.go:360] acquireMachinesLock for running-upgrade-823000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:00:22.641126   13860 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "running-upgrade-823000"
	I0327 14:00:22.641136   13860 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:00:22.641141   13860 fix.go:54] fixHost starting: 
	I0327 14:00:22.641857   13860 fix.go:112] recreateIfNeeded on running-upgrade-823000: state=Running err=<nil>
	W0327 14:00:22.641867   13860 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:00:22.649577   13860 out.go:177] * Updating the running qemu2 "running-upgrade-823000" VM ...
	I0327 14:00:22.652626   13860 machine.go:94] provisionDockerMachine start ...
	I0327 14:00:22.652671   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:22.652782   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:22.652787   13860 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 14:00:22.723620   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-823000
	
	I0327 14:00:22.723635   13860 buildroot.go:166] provisioning hostname "running-upgrade-823000"
	I0327 14:00:22.723681   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:22.723782   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:22.723787   13860 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-823000 && echo "running-upgrade-823000" | sudo tee /etc/hostname
	I0327 14:00:22.797746   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-823000
	
	I0327 14:00:22.797803   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:22.797909   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:22.797919   13860 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-823000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-823000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-823000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 14:00:22.869530   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 14:00:22.869542   13860 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18158-11341/.minikube CaCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18158-11341/.minikube}
	I0327 14:00:22.869550   13860 buildroot.go:174] setting up certificates
	I0327 14:00:22.869555   13860 provision.go:84] configureAuth start
	I0327 14:00:22.869563   13860 provision.go:143] copyHostCerts
	I0327 14:00:22.869637   13860 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem, removing ...
	I0327 14:00:22.869647   13860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem
	I0327 14:00:22.869749   13860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem (1078 bytes)
	I0327 14:00:22.869926   13860 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem, removing ...
	I0327 14:00:22.869930   13860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem
	I0327 14:00:22.869994   13860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem (1123 bytes)
	I0327 14:00:22.870100   13860 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem, removing ...
	I0327 14:00:22.870103   13860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem
	I0327 14:00:22.870136   13860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem (1675 bytes)
	I0327 14:00:22.870241   13860 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-823000 san=[127.0.0.1 localhost minikube running-upgrade-823000]
	I0327 14:00:22.929441   13860 provision.go:177] copyRemoteCerts
	I0327 14:00:22.929480   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 14:00:22.929488   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:00:22.967164   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 14:00:22.973695   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 14:00:22.980180   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 14:00:22.987072   13860 provision.go:87] duration metric: took 117.511292ms to configureAuth
	I0327 14:00:22.987080   13860 buildroot.go:189] setting minikube options for container-runtime
	I0327 14:00:22.987170   13860 config.go:182] Loaded profile config "running-upgrade-823000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:00:22.987203   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:22.987289   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:22.987293   13860 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 14:00:23.057558   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 14:00:23.057565   13860 buildroot.go:70] root file system type: tmpfs
	I0327 14:00:23.057607   13860 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 14:00:23.057657   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:23.057758   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:23.057791   13860 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 14:00:23.131944   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 14:00:23.131992   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:23.132108   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:23.132117   13860 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 14:00:23.204662   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
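The SSH command above is minikube's compare-then-swap unit update: the new docker.service is only moved into place (and docker reloaded, enabled, and restarted) when diff reports a difference. The same pattern, unrolled as a sketch with the paths from this run:

    # swap in the new unit only when it differs from the installed one
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi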
	I0327 14:00:23.204673   13860 machine.go:97] duration metric: took 552.048333ms to provisionDockerMachine
	I0327 14:00:23.204678   13860 start.go:293] postStartSetup for "running-upgrade-823000" (driver="qemu2")
	I0327 14:00:23.204684   13860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 14:00:23.204730   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 14:00:23.204739   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:00:23.242575   13860 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 14:00:23.243796   13860 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 14:00:23.243803   13860 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/addons for local assets ...
	I0327 14:00:23.243852   13860 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/files for local assets ...
	I0327 14:00:23.243933   13860 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem -> 117522.pem in /etc/ssl/certs
	I0327 14:00:23.244014   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 14:00:23.247074   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:00:23.253770   13860 start.go:296] duration metric: took 49.088458ms for postStartSetup
	I0327 14:00:23.253781   13860 fix.go:56] duration metric: took 612.649292ms for fixHost
	I0327 14:00:23.253809   13860 main.go:141] libmachine: Using SSH client type: native
	I0327 14:00:23.253893   13860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101315bf0] 0x101318450 <nil>  [] 0s} localhost 52268 <nil> <nil>}
	I0327 14:00:23.253902   13860 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 14:00:23.323791   13860 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711573222.874941682
	
	I0327 14:00:23.323800   13860 fix.go:216] guest clock: 1711573222.874941682
	I0327 14:00:23.323804   13860 fix.go:229] Guest: 2024-03-27 14:00:22.874941682 -0700 PDT Remote: 2024-03-27 14:00:23.253782 -0700 PDT m=+0.714443001 (delta=-378.840318ms)
	I0327 14:00:23.323816   13860 fix.go:200] guest clock delta is within tolerance: -378.840318ms
	I0327 14:00:23.323819   13860 start.go:83] releasing machines lock for "running-upgrade-823000", held for 682.697584ms
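fix.go:216/229 above samples `date +%s.%N` on the guest and compares it with the host clock, accepting the ~-379ms delta as within tolerance. A rough manual equivalent, assuming the SSH port and key from this run (whole seconds only, since BSD date on the macOS host lacks %N):

    host_ts=$(date +%s)
    guest_ts=$(ssh -p 52268 \
      -i /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa \
      docker@localhost date +%s)
    echo "guest-host clock delta: $((guest_ts - host_ts))s"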
	I0327 14:00:23.323875   13860 ssh_runner.go:195] Run: cat /version.json
	I0327 14:00:23.323887   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:00:23.323875   13860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 14:00:23.323925   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	W0327 14:00:23.324550   13860 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52268: connect: connection refused
	I0327 14:00:23.324573   13860 retry.go:31] will retry after 300.43007ms: dial tcp [::1]:52268: connect: connection refused
	W0327 14:00:23.360315   13860 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 14:00:23.360365   13860 ssh_runner.go:195] Run: systemctl --version
	I0327 14:00:23.362061   13860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 14:00:23.363566   13860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 14:00:23.363589   13860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 14:00:23.366738   13860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 14:00:23.370790   13860 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
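The two find/sed passes above rewrite any bridge or podman CNI config to the 10.244.0.0/16 pod subnet, and cni.go:308 confirms 87-podman-bridge.conflist was the file touched. A quick sketch for checking the result:

    sudo grep -n -e '"subnet"' -e '"gateway"' /etc/cni/net.d/87-podman-bridge.conflist
    # expected after the rewrite: "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"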
	I0327 14:00:23.370796   13860 start.go:494] detecting cgroup driver to use...
	I0327 14:00:23.370917   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:00:23.376319   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 14:00:23.379603   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 14:00:23.383198   13860 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 14:00:23.383224   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 14:00:23.386052   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:00:23.388817   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 14:00:23.391464   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:00:23.394565   13860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 14:00:23.397533   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 14:00:23.400311   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 14:00:23.403074   13860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 14:00:23.406418   13860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 14:00:23.408987   13860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 14:00:23.411638   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:23.500673   13860 ssh_runner.go:195] Run: sudo systemctl restart containerd
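The sed series above pins containerd to the cgroupfs driver (SystemdCgroup = false) and the runc v2 shim before the restart. A sketch for verifying the edits stuck:

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml      # expect: SystemdCgroup = false
    sudo grep -n 'io.containerd.runc' /etc/containerd/config.toml # expect only the v2 shim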
	I0327 14:00:23.511769   13860 start.go:494] detecting cgroup driver to use...
	I0327 14:00:23.511853   13860 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 14:00:23.518197   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:00:23.522689   13860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 14:00:23.530846   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:00:23.535519   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 14:00:23.540490   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:00:23.545839   13860 ssh_runner.go:195] Run: which cri-dockerd
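With /etc/crictl.yaml now pointing at cri-dockerd and `which cri-dockerd` locating the shim, crictl can talk to Docker through CRI once cri-docker.service is up (it is restarted further below). A sketch of the manual check:

    cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version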
	I0327 14:00:23.547014   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 14:00:23.549654   13860 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 14:00:23.554298   13860 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 14:00:23.643067   13860 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 14:00:23.735371   13860 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 14:00:23.735429   13860 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 14:00:23.741502   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:23.896617   13860 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:00:36.648687   13860 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.752215042s)
	I0327 14:00:36.648759   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 14:00:36.654696   13860 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 14:00:36.663467   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:00:36.668251   13860 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 14:00:36.740961   13860 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 14:00:36.804950   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:36.866776   13860 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 14:00:36.872685   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:00:36.877213   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:36.941325   13860 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 14:00:36.980486   13860 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 14:00:36.980565   13860 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 14:00:36.982778   13860 start.go:562] Will wait 60s for crictl version
	I0327 14:00:36.982815   13860 ssh_runner.go:195] Run: which crictl
	I0327 14:00:36.984364   13860 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 14:00:36.996300   13860 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 14:00:36.996360   13860 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:00:37.009156   13860 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:00:37.025782   13860 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 14:00:37.025908   13860 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 14:00:37.027204   13860 kubeadm.go:877] updating cluster {Name:running-upgrade-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 14:00:37.027249   13860 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:00:37.027289   13860 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:00:37.038029   13860 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:00:37.038036   13860 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 14:00:37.038079   13860 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:00:37.041037   13860 ssh_runner.go:195] Run: which lz4
	I0327 14:00:37.042219   13860 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 14:00:37.043432   13860 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 14:00:37.043444   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 14:00:37.692733   13860 docker.go:649] duration metric: took 650.549ms to copy over tarball
	I0327 14:00:37.692796   13860 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 14:00:38.788428   13860 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.09563275s)
	I0327 14:00:38.788443   13860 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 14:00:38.804117   13860 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:00:38.807455   13860 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 14:00:38.812461   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:38.886637   13860 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:00:40.423199   13860 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.536563208s)
	I0327 14:00:40.423301   13860 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:00:40.439768   13860 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:00:40.439776   13860 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 14:00:40.439780   13860 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 14:00:40.447499   13860 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:00:40.447582   13860 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:00:40.447760   13860 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 14:00:40.447867   13860 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:00:40.448172   13860 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:00:40.448298   13860 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:00:40.448330   13860 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:00:40.448794   13860 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:00:40.455766   13860 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:00:40.457083   13860 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:00:40.457160   13860 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:00:40.457198   13860 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:00:40.457212   13860 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:00:40.457353   13860 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:00:40.457405   13860 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 14:00:40.457447   13860 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:00:42.404061   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:00:42.439428   13860 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 14:00:42.439500   13860 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:00:42.439618   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:00:42.460593   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 14:00:42.479092   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 14:00:42.494266   13860 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 14:00:42.494287   13860 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 14:00:42.494342   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 14:00:42.506770   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 14:00:42.506900   13860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 14:00:42.511104   13860 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 14:00:42.511114   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 14:00:42.518687   13860 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 14:00:42.518696   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0327 14:00:42.526367   13860 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 14:00:42.526515   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:00:42.532736   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 14:00:42.541545   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:00:42.552000   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:00:42.555215   13860 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
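The arch-mismatch warning above (image.go:265, want arm64 got amd64) refers to the architecture recorded in the image config. Assuming the image is present in the guest's Docker daemon, it can be read back directly as a sketch:

    docker image inspect --format '{{.Architecture}}' registry.k8s.io/coredns/coredns:v1.8.6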
	I0327 14:00:42.555639   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:00:42.557521   13860 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 14:00:42.557537   13860 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:00:42.557569   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:00:42.569452   13860 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 14:00:42.569474   13860 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:00:42.569530   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 14:00:42.572885   13860 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 14:00:42.572905   13860 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:00:42.572950   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:00:42.596360   13860 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 14:00:42.596387   13860 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:00:42.596417   13860 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 14:00:42.596427   13860 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:00:42.596446   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:00:42.596452   13860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:00:42.596503   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 14:00:42.596583   13860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:00:42.605193   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 14:00:42.605191   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 14:00:42.616877   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 14:00:42.616890   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 14:00:42.616949   13860 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 14:00:42.616968   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 14:00:42.650583   13860 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:00:42.650597   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 14:00:42.690760   13860 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0327 14:00:43.092716   13860 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 14:00:43.093259   13860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:00:43.132260   13860 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 14:00:43.132322   13860 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:00:43.132421   13860 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:00:43.946636   13860 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 14:00:43.947095   13860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:00:43.952477   13860 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 14:00:43.952570   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 14:00:44.007474   13860 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:00:44.007491   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 14:00:44.249125   13860 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 14:00:44.249165   13860 cache_images.go:92] duration metric: took 3.809426292s to LoadCachedImages
	W0327 14:00:44.249203   13860 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0327 14:00:44.249216   13860 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 14:00:44.249272   13860 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-823000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
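The kubelet unit and 10-kubeadm.conf drop-in rendered above are copied onto the node just below (scp memory --> ...). Once in place, the effective unit can be reviewed with systemd itself; a sketch:

    sudo systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in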
	I0327 14:00:44.249343   13860 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 14:00:44.265494   13860 cni.go:84] Creating CNI manager for ""
	I0327 14:00:44.265507   13860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:00:44.265513   13860 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 14:00:44.265521   13860 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-823000 NodeName:running-upgrade-823000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 14:00:44.265583   13860 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-823000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
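	Before this generated config is applied via the individual `kubeadm init phase` commands further below, it can be validated end to end without touching the node. A purely illustrative sketch, using the binaries path from this run:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run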
	
	I0327 14:00:44.265648   13860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 14:00:44.268706   13860 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 14:00:44.268737   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 14:00:44.271951   13860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 14:00:44.277131   13860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 14:00:44.282493   13860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 14:00:44.287979   13860 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 14:00:44.289435   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:00:44.343512   13860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:00:44.348663   13860 certs.go:68] Setting up /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000 for IP: 10.0.2.15
	I0327 14:00:44.348669   13860 certs.go:194] generating shared ca certs ...
	I0327 14:00:44.348677   13860 certs.go:226] acquiring lock for ca certs: {Name:mkbfc84e619c8d37a470429cb64ebb1efb05c6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:00:44.348898   13860 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key
	I0327 14:00:44.348949   13860 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key
	I0327 14:00:44.348956   13860 certs.go:256] generating profile certs ...
	I0327 14:00:44.349027   13860 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.key
	I0327 14:00:44.349045   13860 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key.5168b2f4
	I0327 14:00:44.349055   13860 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt.5168b2f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 14:00:44.475330   13860 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt.5168b2f4 ...
	I0327 14:00:44.475336   13860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt.5168b2f4: {Name:mk9e76159085bd7b1edd2165cb068f4e413a9bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:00:44.475597   13860 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key.5168b2f4 ...
	I0327 14:00:44.475602   13860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key.5168b2f4: {Name:mke1569c1edf0d8a1c80cbf740291dd2e3872289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:00:44.475747   13860 certs.go:381] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt.5168b2f4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt
	I0327 14:00:44.475871   13860 certs.go:385] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key.5168b2f4 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key
	I0327 14:00:44.476007   13860 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/proxy-client.key
	I0327 14:00:44.476138   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem (1338 bytes)
	W0327 14:00:44.476166   13860 certs.go:480] ignoring /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752_empty.pem, impossibly tiny 0 bytes
	I0327 14:00:44.476175   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 14:00:44.476200   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem (1078 bytes)
	I0327 14:00:44.476223   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem (1123 bytes)
	I0327 14:00:44.476249   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem (1675 bytes)
	I0327 14:00:44.476305   13860 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:00:44.476627   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 14:00:44.483965   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 14:00:44.491528   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 14:00:44.499056   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 14:00:44.506145   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 14:00:44.512660   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 14:00:44.519533   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 14:00:44.526822   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 14:00:44.533930   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem --> /usr/share/ca-certificates/11752.pem (1338 bytes)
	I0327 14:00:44.540608   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /usr/share/ca-certificates/117522.pem (1708 bytes)
	I0327 14:00:44.547473   13860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 14:00:44.554082   13860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 14:00:44.558923   13860 ssh_runner.go:195] Run: openssl version
	I0327 14:00:44.560658   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117522.pem && ln -fs /usr/share/ca-certificates/117522.pem /etc/ssl/certs/117522.pem"
	I0327 14:00:44.563630   13860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117522.pem
	I0327 14:00:44.565031   13860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 20:47 /usr/share/ca-certificates/117522.pem
	I0327 14:00:44.565049   13860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117522.pem
	I0327 14:00:44.566708   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117522.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 14:00:44.569548   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 14:00:44.572332   13860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:00:44.573685   13860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 21:00 /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:00:44.573710   13860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:00:44.575376   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 14:00:44.578392   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11752.pem && ln -fs /usr/share/ca-certificates/11752.pem /etc/ssl/certs/11752.pem"
	I0327 14:00:44.581277   13860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11752.pem
	I0327 14:00:44.582637   13860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 20:47 /usr/share/ca-certificates/11752.pem
	I0327 14:00:44.582656   13860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11752.pem
	I0327 14:00:44.584396   13860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11752.pem /etc/ssl/certs/51391683.0"
	I0327 14:00:44.587484   13860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 14:00:44.588861   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 14:00:44.590641   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 14:00:44.592429   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 14:00:44.594135   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 14:00:44.595881   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 14:00:44.597746   13860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
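Each openssl run above uses `-checkend 86400`, which exits 0 only if the cert is still valid 24 hours from now, so the silent sequence here means no cert is about to expire. The same checks batched, as a sketch:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c ok" || echo "$c expires within 24h"
    done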
	I0327 14:00:44.599488   13860 kubeadm.go:391] StartCluster: {Name:running-upgrade-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:00:44.599554   13860 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:00:44.609735   13860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 14:00:44.612867   13860 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 14:00:44.612873   13860 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 14:00:44.612875   13860 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 14:00:44.612897   13860 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 14:00:44.616449   13860 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:00:44.616484   13860 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-823000" does not appear in /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:00:44.616502   13860 kubeconfig.go:62] /Users/jenkins/minikube-integration/18158-11341/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-823000" cluster setting kubeconfig missing "running-upgrade-823000" context setting]
	I0327 14:00:44.616676   13860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:00:44.617298   13860 kapi.go:59] client config for running-upgrade-823000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102607020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:00:44.618074   13860 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 14:00:44.620819   13860 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-823000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0327 14:00:44.620824   13860 kubeadm.go:1154] stopping kube-system containers ...
	I0327 14:00:44.620861   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:00:44.631952   13860 docker.go:483] Stopping containers: [d1e15ce1558f d1f4aa01a25a 422db24cfe16 bc9270661ad6 aff5fc4dd6cf c187be62fbcb 946f9f6ee48b e8e3bd7483ba 8d24b960bfb7 1bc524f47795 dbd62fe532b2 7f6f92d5f589 6fbeb1995f2f 5723fdc0266e 5f71f33816ad]
	I0327 14:00:44.632018   13860 ssh_runner.go:195] Run: docker stop d1e15ce1558f d1f4aa01a25a 422db24cfe16 bc9270661ad6 aff5fc4dd6cf c187be62fbcb 946f9f6ee48b e8e3bd7483ba 8d24b960bfb7 1bc524f47795 dbd62fe532b2 7f6f92d5f589 6fbeb1995f2f 5723fdc0266e 5f71f33816ad
	I0327 14:00:44.643503   13860 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 14:00:44.748171   13860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:00:44.753028   13860 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 27 21:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 27 21:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 27 21:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 27 21:00 /etc/kubernetes/scheduler.conf
	
	I0327 14:00:44.753069   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf
	I0327 14:00:44.756767   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:00:44.756801   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:00:44.760407   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf
	I0327 14:00:44.763999   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:00:44.764024   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:00:44.767497   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf
	I0327 14:00:44.771160   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:00:44.771187   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:00:44.774586   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf
	I0327 14:00:44.777403   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:00:44.777422   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
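
Each of the four kubeconfigs is then probed for the expected control-plane endpoint. `grep` exits 1 on every file, meaning they still point at the old endpoint, so each one is deleted and left for kubeadm to regenerate. The same loop, sketched with the endpoint and paths from the log:

    ep='https://control-plane.minikube.internal:52300'
    for f in admin kubelet controller-manager scheduler; do
        # grep -q exits 1 when the endpoint is absent; remove the stale file in that case
        sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
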
	I0327 14:00:44.779991   13860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:00:44.783128   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:00:44.804373   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:00:45.251048   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:00:45.478920   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:00:45.520794   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
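
Rather than a full `kubeadm init`, minikube replays individual init phases against the refreshed config, in the order certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence (binary path and phase names copied from the Run lines above; the loop form is illustrative):

    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
        # $phase is left unquoted on purpose so 'certs all' splits into two arguments
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
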
	I0327 14:00:45.545378   13860 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:00:45.545450   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:00:46.047738   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:00:46.547495   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:00:46.557169   13860 api_server.go:72] duration metric: took 1.011805s to wait for apiserver process to appear ...
	I0327 14:00:46.557178   13860 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:00:46.557210   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:00:51.559284   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:00:51.559326   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:00:56.559707   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:00:56.559782   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:01.560685   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:01.560754   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:06.561812   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:06.561897   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:11.563369   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:11.563440   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:16.565361   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:16.565451   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:21.566918   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:21.567019   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:26.569627   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:26.569706   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:31.572227   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:31.572304   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:36.573367   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:36.573447   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:41.576107   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:41.576190   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:46.578660   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
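
From this point the test is stuck in minikube's readiness loop: every probe of https://10.0.2.15:8443/healthz times out after roughly 5s (the Client.Timeout in the Get error), the probe is retried, and after enough consecutive failures a full log-gathering pass like the one below runs. A rough curl approximation of the Go client's behavior, assuming the 5s timeout and a self-signed apiserver cert:

    # -k: self-signed cert; -sf: fail quietly on HTTP errors; --max-time mirrors the 5s client timeout
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
        sleep 0.5   # minikube re-probes on a short interval until its outer deadline expires
    done
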
	I0327 14:01:46.578944   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:01:46.601773   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:01:46.601908   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:01:46.617634   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:01:46.617733   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:01:46.630049   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:01:46.630127   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:01:46.640842   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:01:46.640906   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:01:46.651033   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:01:46.651088   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:01:46.662287   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:01:46.662355   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:01:46.672389   13860 logs.go:276] 0 containers: []
	W0327 14:01:46.672399   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:01:46.672460   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:01:46.683398   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:01:46.683413   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:01:46.683419   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:01:46.728443   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:01:46.728454   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:01:46.754103   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:01:46.754115   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:01:46.768426   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:01:46.768438   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:01:46.780492   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:01:46.780506   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:01:46.792329   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:01:46.792342   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:01:46.809266   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:01:46.809277   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:01:46.835892   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:01:46.835899   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:01:46.850046   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:01:46.850056   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:01:46.864845   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:01:46.864855   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:01:46.877102   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:01:46.877113   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:01:46.888672   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:01:46.888682   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:01:46.899607   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:01:46.899619   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:01:46.938601   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:01:46.938609   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:01:46.942998   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:01:46.943009   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:01:47.017141   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:01:47.017152   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:01:47.030773   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:01:47.030784   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
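
Every gathering pass that follows is the same recipe: locate the current and the pre-restart container for each control-plane component via a docker name filter, tail 400 lines from each, and round it out with the kubelet and docker journals, dmesg, container status, and `kubectl describe nodes`. A condensed sketch of the per-component part (filter prefix and tail size from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter="name=k8s_$c" --format '{{.ID}}'); do
            docker logs --tail 400 "$id"   # usually two IDs per component: restarted and original
        done
    done

The passes repeated below differ only in timestamps and ordering; the two kube-apiserver containers (805a648c1afc, the restarted one, and aff5fc4dd6cf, the original) are the first place to look for the failure cause.
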
	I0327 14:01:49.543938   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:01:54.546741   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:01:54.547180   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:01:54.588093   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:01:54.588223   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:01:54.613865   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:01:54.613967   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:01:54.628582   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:01:54.628671   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:01:54.640550   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:01:54.640619   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:01:54.651256   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:01:54.651340   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:01:54.662369   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:01:54.662440   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:01:54.672622   13860 logs.go:276] 0 containers: []
	W0327 14:01:54.672635   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:01:54.672686   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:01:54.683230   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:01:54.683246   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:01:54.683251   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:01:54.696901   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:01:54.696912   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:01:54.707667   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:01:54.707677   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:01:54.719564   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:01:54.719573   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:01:54.723729   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:01:54.723734   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:01:54.737779   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:01:54.737789   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:01:54.779785   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:01:54.779797   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:01:54.819423   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:01:54.819435   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:01:54.854781   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:01:54.854791   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:01:54.876423   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:01:54.876435   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:01:54.888016   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:01:54.888027   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:01:54.902755   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:01:54.902764   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:01:54.914637   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:01:54.914646   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:01:54.931753   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:01:54.931764   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:01:54.945469   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:01:54.945480   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:01:54.958962   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:01:54.958972   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:01:54.986131   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:01:54.986138   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:01:57.499663   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:02.500839   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:02.501280   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:02.561043   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:02.563402   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:02.580687   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:02.580789   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:02.593310   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:02.593376   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:02.605331   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:02.605408   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:02.615665   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:02.615745   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:02.626461   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:02.626528   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:02.636547   13860 logs.go:276] 0 containers: []
	W0327 14:02:02.636556   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:02.636628   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:02.646358   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:02.646373   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:02.646378   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:02.682486   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:02.682500   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:02.719305   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:02.719315   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:02.733538   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:02.733548   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:02.745312   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:02.745324   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:02.757057   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:02.757066   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:02.798326   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:02.798334   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:02.824583   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:02.824590   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:02.838125   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:02.838135   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:02.849536   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:02.849547   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:02.865782   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:02.865794   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:02.882866   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:02.882875   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:02.900390   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:02.900401   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:02.911562   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:02.911572   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:02.923505   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:02.923518   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:02.928148   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:02.928154   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:02.939815   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:02.939826   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:05.456680   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:10.458682   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:10.458940   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:10.491547   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:10.491665   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:10.510889   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:10.510982   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:10.529029   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:10.529097   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:10.540957   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:10.541043   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:10.561385   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:10.561444   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:10.577781   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:10.577839   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:10.587917   13860 logs.go:276] 0 containers: []
	W0327 14:02:10.587929   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:10.587984   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:10.598162   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:10.598180   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:10.598187   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:10.611608   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:10.611622   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:10.624407   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:10.624422   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:10.629234   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:10.629243   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:10.665984   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:10.665994   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:10.680094   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:10.680105   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:10.694370   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:10.694379   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:10.705022   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:10.705033   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:10.716926   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:10.716937   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:10.733260   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:10.733271   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:10.744786   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:10.744797   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:10.786136   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:10.786155   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:10.823098   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:10.823110   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:10.841675   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:10.841684   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:10.865933   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:10.865939   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:10.879347   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:10.879357   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:10.890774   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:10.890784   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:13.411661   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:18.412447   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:18.412683   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:18.438721   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:18.438836   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:18.457292   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:18.457370   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:18.470219   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:18.470279   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:18.481691   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:18.481758   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:18.492317   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:18.492379   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:18.503774   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:18.503834   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:18.514702   13860 logs.go:276] 0 containers: []
	W0327 14:02:18.514713   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:18.514769   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:18.530160   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:18.530178   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:18.530184   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:18.578102   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:18.578113   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:18.592604   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:18.592615   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:18.607790   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:18.607801   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:18.619070   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:18.619080   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:18.631580   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:18.631593   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:18.668576   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:18.668586   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:18.686545   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:18.686556   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:18.707699   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:18.707711   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:18.725392   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:18.725402   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:18.751191   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:18.751198   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:18.755863   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:18.755868   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:18.767591   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:18.767603   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:18.782662   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:18.782675   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:18.797208   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:18.797221   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:18.838781   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:18.838790   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:18.851209   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:18.851221   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:21.364867   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:26.366840   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:26.367307   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:26.403452   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:26.403590   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:26.428821   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:26.428928   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:26.442935   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:26.443004   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:26.454598   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:26.454665   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:26.464897   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:26.464961   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:26.475429   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:26.475488   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:26.485626   13860 logs.go:276] 0 containers: []
	W0327 14:02:26.485638   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:26.485691   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:26.496069   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:26.496086   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:26.496092   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:26.536827   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:26.536839   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:26.574126   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:26.574138   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:26.588080   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:26.588091   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:26.599884   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:26.599896   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:26.625834   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:26.625841   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:26.665923   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:26.665931   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:26.679557   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:26.679568   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:26.690547   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:26.690561   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:26.706635   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:26.706645   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:26.720713   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:26.720722   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:26.732339   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:26.732351   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:26.746463   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:26.746472   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:26.759710   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:26.759724   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:26.764009   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:26.764014   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:26.781401   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:26.781413   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:26.800051   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:26.800060   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:29.313886   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:34.316002   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:34.316130   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:34.332103   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:34.332188   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:34.348216   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:34.348299   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:34.359104   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:34.359178   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:34.371319   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:34.371391   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:34.386396   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:34.386463   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:34.396969   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:34.397040   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:34.407617   13860 logs.go:276] 0 containers: []
	W0327 14:02:34.407629   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:34.407691   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:34.420204   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:34.420226   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:34.420232   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:34.435401   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:34.435414   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:34.451195   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:34.451210   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:34.466282   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:34.466302   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:34.478512   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:34.478524   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:34.483577   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:34.483588   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:34.495983   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:34.495997   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:34.511344   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:34.511355   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:34.530617   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:34.530630   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:34.547847   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:34.547860   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:34.560188   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:34.560198   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:34.574325   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:34.574335   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:34.586023   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:34.586034   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:34.610702   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:34.610708   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:34.648257   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:34.648270   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:34.682065   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:34.682077   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:34.694317   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:34.694328   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:37.235843   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:42.237704   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:42.237799   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:42.248639   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:42.248717   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:42.258913   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:42.258983   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:42.269489   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:42.269552   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:42.280851   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:42.280935   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:42.291933   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:42.292008   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:42.306049   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:42.306119   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:42.321550   13860 logs.go:276] 0 containers: []
	W0327 14:02:42.321561   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:42.321629   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:42.333522   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:42.333538   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:42.333545   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:42.348932   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:42.348942   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:42.363366   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:42.363388   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:42.375173   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:42.375183   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:42.387401   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:42.387415   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:42.428535   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:42.428542   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:42.461852   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:42.461864   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:42.506782   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:42.506795   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:42.523843   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:42.523854   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:42.535152   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:42.535164   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:42.548557   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:42.548567   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:42.560572   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:42.561273   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:42.577735   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:42.577746   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:42.592043   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:42.592053   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:42.603680   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:42.603692   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:42.608608   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:42.608615   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:42.619928   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:42.619939   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:45.146865   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:50.149097   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:50.149235   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:50.161087   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:50.161175   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:50.172056   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:50.172130   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:50.182899   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:50.182962   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:50.193708   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:50.193772   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:50.204016   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:50.204086   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:50.214865   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:50.214938   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:50.226999   13860 logs.go:276] 0 containers: []
	W0327 14:02:50.227009   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:50.227068   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:50.241144   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:02:50.241162   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:50.241168   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:50.255373   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:50.255382   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:50.269804   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:50.269815   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:50.282403   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:50.282417   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:50.306187   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:50.306200   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:50.331350   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:50.331358   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:50.370522   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:50.370536   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:50.389586   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:50.389600   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:02:50.402194   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:50.402205   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:50.443186   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:50.443206   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:50.455144   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:50.455156   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:50.469804   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:50.469815   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:50.481629   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:50.481639   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:50.498909   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:50.498919   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:50.515020   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:50.515032   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:50.527503   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:50.527516   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:50.570783   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:50.570799   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:53.086935   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:02:58.089146   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:02:58.089344   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:02:58.101133   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:02:58.101205   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:02:58.112624   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:02:58.112703   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:02:58.124578   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:02:58.124646   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:02:58.136447   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:02:58.136518   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:02:58.147410   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:02:58.147478   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:02:58.158431   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:02:58.158502   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:02:58.169140   13860 logs.go:276] 0 containers: []
	W0327 14:02:58.169149   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:02:58.169204   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:02:58.181024   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
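
Between the probe and each collection pass, minikube re-enumerates the control-plane containers, one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per component; the kindnet query matches nothing on this docker-runtime cluster, hence the repeated W-level warning. A runnable sketch of that discovery step, assuming local docker CLI access (minikube itself runs these over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        // Same filter and format string as the frames above.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("W: no container found matching %q\n", c) // the kindnet case
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
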
	I0327 14:02:58.181042   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:02:58.181048   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:02:58.194857   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:02:58.194868   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:02:58.231312   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:02:58.231323   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:02:58.249290   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:02:58.249303   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:02:58.263537   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:02:58.263548   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:02:58.278600   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:02:58.278614   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:02:58.292911   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:02:58.292922   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:02:58.297719   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:02:58.297727   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:02:58.336051   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:02:58.336065   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:02:58.347541   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:02:58.347552   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:02:58.359343   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:02:58.359354   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:02:58.371552   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:02:58.371564   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:02:58.389659   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:02:58.389672   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:02:58.401559   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:02:58.401570   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:02:58.427197   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:02:58.427208   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:02:58.469051   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:02:58.469062   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:02:58.486957   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:02:58.486968   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:01.000950   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:06.001179   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:06.001296   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:06.012484   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:06.012559   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:06.025379   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:06.025454   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:06.037702   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:06.037777   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:06.050498   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:06.050576   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:06.062674   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:06.062747   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:06.074688   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:06.074761   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:06.085512   13860 logs.go:276] 0 containers: []
	W0327 14:03:06.085525   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:06.085590   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:06.098739   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:06.098760   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:06.098766   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:06.112889   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:06.112902   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:06.128458   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:06.128474   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:06.140999   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:06.141011   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:06.156269   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:06.156281   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:06.173140   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:06.173158   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:06.200324   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:06.200337   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:06.214475   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:06.214488   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:06.227928   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:06.227944   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:06.275198   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:06.275225   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:06.290618   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:06.290634   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:06.331278   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:06.331298   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:06.343714   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:06.343724   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:06.381085   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:06.381097   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:06.396578   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:06.396590   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:06.408777   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:06.408792   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:06.433413   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:06.433425   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:08.939872   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:13.942621   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:13.942870   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:13.966224   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:13.966334   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:13.982180   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:13.982267   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:13.995039   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:13.995115   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:14.006149   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:14.006218   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:14.019336   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:14.019408   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:14.031838   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:14.031912   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:14.041579   13860 logs.go:276] 0 containers: []
	W0327 14:03:14.041589   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:14.041645   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:14.051940   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:14.051955   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:14.051961   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:14.063791   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:14.063803   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:14.074817   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:14.074827   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:14.086609   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:14.086619   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:14.124097   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:14.124110   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:14.140410   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:14.140420   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:14.152447   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:14.152461   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:14.169294   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:14.169304   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:14.208197   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:14.208204   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:14.212462   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:14.212470   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:14.249521   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:14.249532   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:14.263766   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:14.263776   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:14.274603   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:14.274615   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:14.288153   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:14.288163   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:14.302101   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:14.302112   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:14.315818   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:14.315828   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:14.326945   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:14.326955   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:16.853555   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:21.856029   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:21.856633   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:21.894010   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:21.894139   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:21.914758   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:21.914889   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:21.929626   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:21.929698   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:21.941758   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:21.941838   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:21.953254   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:21.953322   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:21.963935   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:21.964035   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:21.974726   13860 logs.go:276] 0 containers: []
	W0327 14:03:21.974736   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:21.974797   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:21.984995   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:21.985018   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:21.985024   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:21.996707   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:21.996717   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:22.008474   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:22.008487   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:22.019871   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:22.019883   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:22.055851   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:22.055862   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:22.092813   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:22.092825   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:22.104434   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:22.104443   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:22.118257   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:22.118269   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:22.157570   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:22.157577   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:22.170956   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:22.170966   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:22.188603   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:22.188613   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:22.202282   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:22.202293   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:22.227053   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:22.227068   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:22.239513   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:22.239523   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:22.244099   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:22.244107   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:22.258942   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:22.258954   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:22.274433   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:22.274445   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:24.795008   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:29.797205   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:29.797320   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:29.812153   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:29.812230   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:29.823757   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:29.823823   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:29.835626   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:29.835701   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:29.846637   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:29.846707   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:29.857218   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:29.857279   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:29.867786   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:29.867851   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:29.877405   13860 logs.go:276] 0 containers: []
	W0327 14:03:29.877415   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:29.877472   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:29.888373   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:29.888390   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:29.888395   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:29.927030   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:29.927041   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:29.967409   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:29.967432   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:29.981270   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:29.981286   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:30.000588   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:30.000605   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:30.017454   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:30.017471   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:30.030690   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:30.030705   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:30.075054   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:30.075076   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:30.091199   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:30.091216   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:30.104348   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:30.104361   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:30.119387   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:30.119400   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:30.147364   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:30.147383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:30.163174   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:30.163189   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:30.168451   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:30.168463   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:30.184202   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:30.184215   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:30.198132   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:30.198143   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:30.216936   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:30.216958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:32.733962   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:37.736500   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:37.736929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:37.770401   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:37.770526   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:37.789917   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:37.790006   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:37.804220   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:37.804298   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:37.816450   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:37.816518   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:37.827310   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:37.827379   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:37.838254   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:37.838328   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:37.848161   13860 logs.go:276] 0 containers: []
	W0327 14:03:37.848173   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:37.848226   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:37.861328   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:37.861348   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:37.861353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:37.873502   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:37.873510   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:37.889638   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:37.889651   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:37.901571   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:37.901585   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:37.942703   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:37.942710   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:37.958920   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:37.958931   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:37.975621   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:37.975634   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:37.989878   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:37.989890   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:37.994121   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:37.994129   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:38.007487   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:38.007499   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:38.021860   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:38.021873   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:38.033864   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:38.033874   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:38.073692   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:38.073863   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:38.113755   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:38.113766   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:38.125284   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:38.125294   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:38.147767   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:38.147774   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:38.159142   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:38.159152   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:40.680974   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:45.683288   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:45.683423   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:45.695607   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:45.695680   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:45.706414   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:45.706489   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:45.717110   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:45.717187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:45.727986   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:45.728056   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:45.745343   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:45.745410   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:45.756055   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:45.756127   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:45.766075   13860 logs.go:276] 0 containers: []
	W0327 14:03:45.766086   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:45.766146   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:45.776718   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:45.776738   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:45.776743   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:45.812162   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:45.812176   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:45.832282   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:45.832295   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:45.845190   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:45.845201   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:45.864240   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:45.864254   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:45.890016   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:45.890031   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:45.904327   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:45.904340   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:45.921996   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:45.922010   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:45.933759   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:45.933771   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:45.950339   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:45.950352   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:45.962263   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:45.962276   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:45.973707   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:45.973718   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:46.015889   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:46.015903   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:46.030520   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:46.030532   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:46.042150   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:46.042162   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:46.046303   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:46.046309   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:46.085357   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:46.085367   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:48.599424   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:53.602005   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:53.602187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:53.619769   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:53.619859   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:53.632525   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:53.632597   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:53.643321   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:03:53.643393   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:53.653711   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:53.653799   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:53.663856   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:03:53.663929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:53.674842   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:53.674902   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:53.684862   13860 logs.go:276] 0 containers: []
	W0327 14:03:53.684871   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:53.684926   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:53.695511   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:53.695528   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:53.695533   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:53.710330   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:53.710342   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:53.723263   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:53.723273   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:53.740391   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:53.740401   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:53.759140   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:53.759149   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:53.763950   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:53.763958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:53.801179   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:53.801195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:53.812952   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:53.812963   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:53.835558   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:53.835565   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:53.874729   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:53.874740   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:53.888032   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:53.888045   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:53.898920   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:53.898933   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:53.912927   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:53.912939   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:53.924253   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:53.924265   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:53.936336   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:53.936346   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:53.948090   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:53.948103   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:53.987238   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:53.987249   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:56.502248   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:01.504465   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:01.504582   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:01.517247   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:01.517315   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:01.529143   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:01.529211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:01.539826   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:01.539888   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:01.550692   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:01.550754   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:01.562216   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:01.562303   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:01.573081   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:01.573152   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:01.588248   13860 logs.go:276] 0 containers: []
	W0327 14:04:01.588260   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:01.588322   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:01.599286   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:01.599328   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:01.599334   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:01.616135   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:01.616146   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:01.630793   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:01.630809   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:01.642632   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:01.642647   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:01.658771   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:01.658785   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:01.676070   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:01.676084   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:01.687897   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:01.687911   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:01.699540   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:01.699551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:01.713234   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:01.713248   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:01.733801   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:01.733811   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:01.746971   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:01.746985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:01.784461   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:01.784475   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:01.796142   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:01.796156   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:01.800448   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:01.800455   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:01.836371   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:01.836384   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:01.850279   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:01.850291   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:01.873181   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:01.873188   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:04.412216   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:09.413549   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:09.413657   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:09.424730   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:09.424798   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:09.439347   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:09.439419   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:09.450071   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:09.450139   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:09.461180   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:09.461253   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:09.474655   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:09.474731   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:09.489035   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:09.489114   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:09.500668   13860 logs.go:276] 0 containers: []
	W0327 14:04:09.500680   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:09.500749   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:09.519535   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:09.519552   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:09.519558   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:09.571573   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:09.571588   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:09.583708   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:09.583719   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:09.595730   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:09.595741   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:09.616632   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:09.616643   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:09.639304   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:09.639312   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:09.659537   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:09.659548   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:09.674185   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:09.674197   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:09.688295   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:09.688306   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:09.702411   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:09.702420   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:09.744857   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:09.744869   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:09.782067   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:09.782080   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:09.799259   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:09.799269   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:09.811535   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:09.811546   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:09.823124   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:09.823134   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:09.835421   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:09.835435   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:09.840070   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:09.840076   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:12.356767   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:17.359052   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:17.359457   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:17.391268   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:17.391402   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:17.409778   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:17.409877   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:17.428594   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:17.428660   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:17.439683   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:17.439758   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:17.450095   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:17.450166   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:17.460349   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:17.460423   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:17.471027   13860 logs.go:276] 0 containers: []
	W0327 14:04:17.471037   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:17.471094   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:17.481632   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:17.481654   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:17.481659   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:17.500033   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:17.500044   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:17.512280   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:17.512291   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:17.554204   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:17.554221   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:17.558730   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:17.559453   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:17.598678   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:17.598693   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:17.610878   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:17.610889   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:17.622849   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:17.622861   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:17.659154   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:17.659165   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:17.673194   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:17.673206   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:17.688448   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:17.688460   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:17.705385   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:17.705398   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:17.719282   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:17.719293   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:17.740838   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:17.740850   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:17.770049   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:17.770059   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:17.792840   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:17.792850   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:17.804569   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:17.804582   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
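The repeating logs.go:276 / logs.go:123 pairs above all follow one diagnostic pattern: list the container IDs matching each k8s_<component> name filter, then tail the last 400 log lines of every hit. A minimal Go sketch of that pattern, assuming a local docker CLI; the helper is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`
// from the log: it returns all (running or exited) container IDs for a component.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	// The same component set the log cycles through; kindnet has no container
	// here, which is why the log prints `No container was found matching "kindnet"`.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		for _, id := range containerIDs(c) {
			fmt.Printf("=== %s [%s] ===\n", c, id)
			// Mirrors `docker logs --tail 400 <id>` from the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}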
	I0327 14:04:20.318975   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:25.321410   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:25.321753   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:25.350504   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:25.350635   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:25.368642   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:25.368729   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:25.382215   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:25.382283   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:25.393886   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:25.393965   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:25.405586   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:25.405664   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:25.416762   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:25.416834   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:25.430861   13860 logs.go:276] 0 containers: []
	W0327 14:04:25.430872   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:25.430933   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:25.441363   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:25.441378   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:25.441384   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:25.455871   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:25.455882   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:25.466805   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:25.466816   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:25.478979   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:25.478988   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:25.520674   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:25.520685   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:25.537922   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:25.537932   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:25.550249   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:25.550261   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:25.569288   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:25.569298   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:25.583919   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:25.583930   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:25.588698   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:25.588704   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:25.628815   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:25.628826   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:25.652713   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:25.652728   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:25.692783   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:25.692795   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:25.707017   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:25.707029   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:25.721549   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:25.721562   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:25.733107   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:25.733130   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:25.747868   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:25.747880   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:28.261736   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:33.263627   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:33.263998   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:33.294584   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:33.294713   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:33.312461   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:33.312558   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:33.326999   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:33.327086   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:33.339122   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:33.339196   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:33.350601   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:33.350666   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:33.360975   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:33.361041   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:33.371210   13860 logs.go:276] 0 containers: []
	W0327 14:04:33.371220   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:33.371277   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:33.381752   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:33.381769   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:33.381775   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:33.422340   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:33.422360   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:33.439259   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:33.439270   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:33.457050   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:33.457063   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:33.468939   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:33.468952   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:33.484282   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:33.484293   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:33.488877   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:33.488883   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:33.523902   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:33.523913   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:33.538484   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:33.538496   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:33.552861   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:33.552873   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:33.564364   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:33.564374   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:33.606026   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:33.606039   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:33.623482   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:33.623492   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:33.637048   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:33.637061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:33.650655   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:33.650666   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:33.662845   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:33.662858   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:33.692169   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:33.692185   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:36.220347   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:41.222705   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:41.222924   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:41.235189   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:41.235268   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:41.245698   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:41.245767   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:41.256345   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:41.256410   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:41.267317   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:41.267397   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:41.277758   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:41.277826   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:41.288291   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:41.288356   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:41.298178   13860 logs.go:276] 0 containers: []
	W0327 14:04:41.298190   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:41.298254   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:41.308502   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:41.308519   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:41.308525   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:41.347237   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:41.347247   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:41.361797   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:41.361807   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:41.372859   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:41.372869   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:41.377219   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:41.377228   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:41.410357   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:41.410368   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:41.421427   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:41.421440   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:41.439047   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:41.439057   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:41.463291   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:41.463300   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:41.477473   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:41.477486   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:41.492051   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:41.492061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:41.503860   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:41.503871   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:41.518281   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:41.518292   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:41.530096   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:41.530107   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:41.571060   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:41.571070   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:41.587597   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:41.587609   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:41.604423   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:41.604435   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:44.122711   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:49.125268   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:49.125437   13860 kubeadm.go:591] duration metric: took 4m4.519984334s to restartPrimaryControlPlane
	W0327 14:04:49.125575   13860 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 14:04:49.125634   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 14:04:50.128270   13860 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002635125s)
	I0327 14:04:50.128335   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 14:04:50.133346   13860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:04:50.136543   13860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:04:50.139382   13860 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:04:50.139388   13860 kubeadm.go:156] found existing configuration files:
	
	I0327 14:04:50.139407   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf
	I0327 14:04:50.142171   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:04:50.142198   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:04:50.144772   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf
	I0327 14:04:50.147756   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:04:50.147783   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:04:50.150694   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf
	I0327 14:04:50.153051   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:04:50.153073   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:04:50.156001   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf
	I0327 14:04:50.159185   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:04:50.159206   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
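The grep/rm sequence above is a stale-kubeconfig sweep: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and removed otherwise so the upcoming kubeadm init regenerates it. In this run every grep exits with status 2 because the earlier kubeadm reset already deleted the files. A minimal sketch of the same sweep, with the endpoint and file list taken from the log; the helper is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs keeps a kubeconfig only when it already points at the
// expected control-plane endpoint; otherwise it removes the file so that
// `kubeadm init` rewrites it. grep exits non-zero when the endpoint is
// missing or, as in the log above, when the file itself no longer exists.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:52300", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}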
	I0327 14:04:50.161673   13860 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 14:04:50.178840   13860 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 14:04:50.178877   13860 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 14:04:50.224950   13860 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 14:04:50.225015   13860 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 14:04:50.225061   13860 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 14:04:50.280960   13860 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 14:04:50.285033   13860 out.go:204]   - Generating certificates and keys ...
	I0327 14:04:50.285068   13860 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 14:04:50.285105   13860 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 14:04:50.285164   13860 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 14:04:50.285197   13860 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 14:04:50.285237   13860 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 14:04:50.285267   13860 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 14:04:50.285309   13860 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 14:04:50.285352   13860 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 14:04:50.285386   13860 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 14:04:50.285433   13860 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 14:04:50.285456   13860 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 14:04:50.285488   13860 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 14:04:50.444152   13860 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 14:04:50.578539   13860 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 14:04:50.742162   13860 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 14:04:50.834903   13860 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 14:04:50.864410   13860 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 14:04:50.864821   13860 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 14:04:50.864925   13860 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 14:04:50.949614   13860 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 14:04:50.953859   13860 out.go:204]   - Booting up control plane ...
	I0327 14:04:50.953915   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 14:04:50.954003   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 14:04:50.954075   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 14:04:50.954123   13860 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 14:04:50.954245   13860 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 14:04:55.463602   13860 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.509532 seconds
	I0327 14:04:55.463671   13860 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 14:04:55.468055   13860 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 14:04:55.976603   13860 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 14:04:55.976743   13860 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-823000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 14:04:56.480495   13860 kubeadm.go:309] [bootstrap-token] Using token: k1fncm.sxv8egnmuwflk0mk
	I0327 14:04:56.482160   13860 out.go:204]   - Configuring RBAC rules ...
	I0327 14:04:56.482221   13860 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 14:04:56.482797   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 14:04:56.489152   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 14:04:56.490073   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 14:04:56.491054   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 14:04:56.491910   13860 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 14:04:56.496006   13860 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 14:04:56.645989   13860 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 14:04:56.885052   13860 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 14:04:56.885784   13860 kubeadm.go:309] 
	I0327 14:04:56.885825   13860 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 14:04:56.885829   13860 kubeadm.go:309] 
	I0327 14:04:56.885902   13860 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 14:04:56.885941   13860 kubeadm.go:309] 
	I0327 14:04:56.885959   13860 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 14:04:56.885997   13860 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 14:04:56.886076   13860 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 14:04:56.886096   13860 kubeadm.go:309] 
	I0327 14:04:56.886128   13860 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 14:04:56.886131   13860 kubeadm.go:309] 
	I0327 14:04:56.886156   13860 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 14:04:56.886161   13860 kubeadm.go:309] 
	I0327 14:04:56.886196   13860 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 14:04:56.886272   13860 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 14:04:56.886344   13860 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 14:04:56.886350   13860 kubeadm.go:309] 
	I0327 14:04:56.886425   13860 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 14:04:56.886542   13860 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 14:04:56.886546   13860 kubeadm.go:309] 
	I0327 14:04:56.886587   13860 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token k1fncm.sxv8egnmuwflk0mk \
	I0327 14:04:56.886632   13860 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf \
	I0327 14:04:56.886644   13860 kubeadm.go:309] 	--control-plane 
	I0327 14:04:56.886646   13860 kubeadm.go:309] 
	I0327 14:04:56.886695   13860 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 14:04:56.886697   13860 kubeadm.go:309] 
	I0327 14:04:56.886742   13860 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token k1fncm.sxv8egnmuwflk0mk \
	I0327 14:04:56.886798   13860 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf 
	I0327 14:04:56.886847   13860 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 14:04:56.886858   13860 cni.go:84] Creating CNI manager for ""
	I0327 14:04:56.886866   13860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:04:56.892669   13860 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 14:04:56.899577   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 14:04:56.902426   13860 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
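The two commands above create /etc/cni/net.d and copy in a 457-byte bridge conflist named 1-k8s.conflist. The exact file contents are not reproduced in the log; the sketch below writes a typical bridge-plus-portmap chain to show what such a conflist looks like. The JSON body and the subnet are assumptions, not the verbatim file:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A representative bridge CNI conflist. The real 1-k8s.conflist minikube
// copies is 457 bytes and not shown in the log; this JSON is illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d" // log: sudo mkdir -p /etc/cni/net.d
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Println("wrote", path)
}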
	I0327 14:04:56.907113   13860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 14:04:56.907195   13860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-823000 minikube.k8s.io/updated_at=2024_03_27T14_04_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=running-upgrade-823000 minikube.k8s.io/primary=true
	I0327 14:04:56.907226   13860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 14:04:56.910537   13860 ops.go:34] apiserver oom_adj: -16
	I0327 14:04:56.950798   13860 kubeadm.go:1107] duration metric: took 43.665667ms to wait for elevateKubeSystemPrivileges
	W0327 14:04:56.950905   13860 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 14:04:56.950911   13860 kubeadm.go:393] duration metric: took 4m12.358969833s to StartCluster
	I0327 14:04:56.950920   13860 settings.go:142] acquiring lock: {Name:mkdd1901c274fdaab611fbdc96cb9f09e61b9c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:04:56.951086   13860 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:04:56.951478   13860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:04:56.951689   13860 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:04:56.955728   13860 out.go:177] * Verifying Kubernetes components...
	I0327 14:04:56.951707   13860 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 14:04:56.951864   13860 config.go:182] Loaded profile config "running-upgrade-823000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:04:56.963662   13860 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-823000"
	I0327 14:04:56.963675   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:04:56.963677   13860 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-823000"
	W0327 14:04:56.963680   13860 addons.go:243] addon storage-provisioner should already be in state true
	I0327 14:04:56.963677   13860 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-823000"
	I0327 14:04:56.963695   13860 host.go:66] Checking if "running-upgrade-823000" exists ...
	I0327 14:04:56.963701   13860 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-823000"
	I0327 14:04:56.964915   13860 kapi.go:59] client config for running-upgrade-823000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102607020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:04:56.965728   13860 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-823000"
	W0327 14:04:56.965735   13860 addons.go:243] addon default-storageclass should already be in state true
	I0327 14:04:56.965742   13860 host.go:66] Checking if "running-upgrade-823000" exists ...
	I0327 14:04:56.970702   13860 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:04:56.973576   13860 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:04:56.973582   13860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 14:04:56.973588   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:04:56.974245   13860 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 14:04:56.974250   13860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 14:04:56.974253   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:04:57.042839   13860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:04:57.047769   13860 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:04:57.047805   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:04:57.052215   13860 api_server.go:72] duration metric: took 100.51625ms to wait for apiserver process to appear ...
	I0327 14:04:57.052224   13860 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:04:57.052231   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:57.081328   13860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:04:57.081902   13860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 14:05:02.052845   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:02.052878   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:07.054167   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:07.054209   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:12.054423   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:12.054448   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:17.054643   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:17.054667   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:22.054963   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:22.055011   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:27.055476   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:27.055513   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 14:05:27.440237   13860 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 14:05:27.444625   13860 out.go:177] * Enabled addons: storage-provisioner
	I0327 14:05:27.452490   13860 addons.go:505] duration metric: took 30.501221708s for enable addons: enabled=[storage-provisioner]
	I0327 14:05:32.056256   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:32.056274   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:37.057004   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:37.057044   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:42.058212   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:42.058289   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:47.059924   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:47.059947   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:52.061748   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:52.061823   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:57.064156   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
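Every api_server.go:253/269 pair above is one round of the same health poll: GET https://10.0.2.15:8443/healthz with a roughly five-second client timeout, repeated until an overall deadline, and in this run every round times out. A minimal Go sketch of that poll, with the endpoint and timings taken from the log; the helper is illustrative, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it answers 200 OK
// or the overall deadline passes. Each probe uses a short client timeout,
// matching the ~5s "Client.Timeout exceeded" gaps in the log above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate for 10.0.2.15.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for end := time.Now().Add(deadline); time.Now().Before(end); {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // the log shows a few seconds between rounds
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}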
	I0327 14:05:57.064270   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:57.099840   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:05:57.099913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:57.115270   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:05:57.115344   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:57.126181   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:05:57.126251   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:57.136670   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:05:57.136742   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:57.148872   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:05:57.148933   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:57.159768   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:05:57.159839   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:57.170277   13860 logs.go:276] 0 containers: []
	W0327 14:05:57.170290   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:57.170348   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:57.180742   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:05:57.180757   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:57.180764   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:57.185864   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:57.185873   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:57.221863   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:05:57.221875   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:05:57.238259   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:05:57.238270   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:05:57.249953   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:05:57.249970   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:05:57.263162   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:57.263171   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:57.287467   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:57.287474   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:57.321725   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:05:57.321734   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:05:57.333601   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:05:57.333612   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:05:57.348172   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:05:57.348180   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:05:57.364930   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:05:57.364940   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:05:57.378984   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:05:57.378997   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:57.390314   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:05:57.390326   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:05:59.906559   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:04.909119   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:04.909347   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:04.929739   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:04.929837   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:04.945179   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:04.945258   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:04.957912   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:04.957988   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:04.975657   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:04.975729   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:04.985818   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:04.985885   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:04.996881   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:04.996949   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:05.009813   13860 logs.go:276] 0 containers: []
	W0327 14:06:05.009825   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:05.009884   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:05.020186   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:05.020201   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:05.020207   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:05.035507   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:05.035520   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:05.047720   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:05.047729   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:05.059205   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:05.059219   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:05.093460   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:05.093467   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:05.130431   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:05.130442   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:05.144897   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:05.144907   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:05.156402   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:05.156412   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:05.180448   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:05.180457   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:05.192260   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:05.192270   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:05.197026   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:05.197032   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:05.210841   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:05.210852   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:05.221920   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:05.221931   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:07.740361   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:12.742592   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:12.742790   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:12.760647   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:12.760746   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:12.774907   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:12.774980   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:12.786611   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:12.786684   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:12.797385   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:12.797449   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:12.807925   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:12.807994   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:12.818810   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:12.818877   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:12.829465   13860 logs.go:276] 0 containers: []
	W0327 14:06:12.829475   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:12.829533   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:12.839803   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:12.839819   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:12.839825   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:12.853875   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:12.853885   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:12.865261   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:12.865273   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:12.877084   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:12.877094   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:12.894103   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:12.894112   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:12.905227   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:12.905237   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:12.928931   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:12.928941   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:12.940298   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:12.940311   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:12.976297   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:12.976308   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:12.981303   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:12.981311   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:13.002506   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:13.002517   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:13.014471   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:13.014488   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:13.029767   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:13.029778   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
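The api_server.go:253/269 pairs above form a retry loop: each attempt GETs /healthz on the VM's apiserver with a 5-second client timeout ("Client.Timeout exceeded" in the log), and every failure falls through to the log-gathering pass that follows. A minimal Go sketch of that probe pattern, assuming a plain net/http client; waitForAPIServer, its backoff, and the overall deadline are illustrative, not minikube's actual code, and the real probes run from inside the test harness against the VM:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline expires. Hypothetical helper; the endpoint
// and 5s client timeout mirror the log above (https://10.0.2.15:8443/healthz).
func waitForAPIServer(host string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded" above
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := fmt.Sprintf("https://%s/healthz", host)
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForAPIServer("10.0.2.15:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}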
	I0327 14:06:15.566052   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:20.568305   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:20.568524   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:20.590602   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:20.590709   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:20.605584   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:20.605672   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:20.618686   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:20.618753   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:20.629800   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:20.629870   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:20.640568   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:20.640641   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:20.650736   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:20.650800   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:20.660749   13860 logs.go:276] 0 containers: []
	W0327 14:06:20.660764   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:20.660825   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:20.671552   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:20.671567   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:20.671576   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:20.682751   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:20.682765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:20.707233   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:20.707241   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:20.718800   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:20.718810   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:20.753028   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:20.753039   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:20.790540   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:20.790551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:20.805370   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:20.805383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:20.816668   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:20.816679   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:20.828218   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:20.828234   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:20.833065   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:20.833072   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:20.846771   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:20.846783   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:20.860989   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:20.861001   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:20.874945   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:20.874958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
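Each gathering pass begins by resolving container IDs per control-plane component with docker name filters; the logs.go:276 lines above report those counts (including "0 containers" for the absent kindnet). A rough local equivalent, assuming a docker CLI on PATH; containerIDs is a hypothetical helper, and minikube issues the same command through its ssh_runner inside the VM:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the kubeadm naming convention k8s_<component>. Illustrative
// stand-in for the "docker ps -a --filter=name=k8s_..." calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}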
	I0327 14:06:23.393154   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:28.395465   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:28.395882   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:28.448573   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:28.448701   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:28.467543   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:28.467619   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:28.480524   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:28.480622   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:28.493821   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:28.493890   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:28.504811   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:28.504887   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:28.516501   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:28.516580   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:28.531290   13860 logs.go:276] 0 containers: []
	W0327 14:06:28.531300   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:28.531352   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:28.548594   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:28.548608   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:28.548614   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:28.560143   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:28.560154   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:28.577257   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:28.577268   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:28.588503   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:28.588519   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:28.622379   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:28.622385   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:28.626512   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:28.626520   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:28.640590   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:28.640602   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:28.655597   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:28.655607   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:28.668946   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:28.668968   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:28.693938   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:28.693946   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:28.728084   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:28.728095   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:28.743717   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:28.743728   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:28.754774   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:28.754783   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:31.268614   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:36.270788   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:36.270937   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:36.282519   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:36.282596   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:36.296406   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:36.296474   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:36.311503   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:36.311582   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:36.321662   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:36.321731   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:36.331947   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:36.332018   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:36.342442   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:36.342507   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:36.355360   13860 logs.go:276] 0 containers: []
	W0327 14:06:36.355371   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:36.355427   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:36.366000   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:36.366014   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:36.366019   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:36.377755   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:36.377765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:36.411296   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:36.411304   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:36.415433   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:36.415440   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:36.449597   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:36.449610   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:36.464040   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:36.464052   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:36.479301   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:36.479314   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:36.492397   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:36.492408   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:36.507510   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:36.507522   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:36.519015   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:36.519027   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:36.530691   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:36.530701   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:36.545374   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:36.545383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:36.563015   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:36.563025   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:39.088341   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:44.091040   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:44.091456   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:44.135325   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:44.135468   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:44.155818   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:44.155915   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:44.175717   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:44.175794   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:44.189643   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:44.189716   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:44.200897   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:44.200970   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:44.211610   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:44.211688   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:44.221788   13860 logs.go:276] 0 containers: []
	W0327 14:06:44.221801   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:44.221855   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:44.232671   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:44.232685   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:44.232689   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:44.244368   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:44.244378   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:44.256575   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:44.256589   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:44.271817   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:44.271827   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:44.283318   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:44.283332   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:44.307763   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:44.307770   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:44.341880   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:44.341889   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:44.346390   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:44.346396   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:44.360665   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:44.360675   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:44.372458   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:44.372467   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:44.389802   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:44.389811   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:44.436155   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:44.436166   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:44.451703   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:44.451716   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
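With the IDs in hand, every source is dumped with a bounded tail: docker logs --tail 400 for containers, journalctl -n 400 for the kubelet and Docker units, and kubectl describe nodes against the in-VM kubeconfig, so a wedged component cannot flood the report. A compact sketch of that fan-out, shelling out through /bin/bash -c just as the ssh_runner.go:195 lines show; gather is an illustrative helper, and the etcd container ID is copied from the log above:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash -c, mirroring the
// ssh_runner.go entries above. Output is truncated at the source
// (--tail / -n 400) rather than after collection.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", name, out)
	if err != nil {
		fmt.Println("(command failed:", err, ")")
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("etcd", "docker logs --tail 400 7ecf48f350a3") // ID from the log above
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes"+
			" --kubeconfig=/var/lib/minikube/kubeconfig")
}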
	I0327 14:06:46.969663   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:51.971923   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:51.972069   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:51.988942   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:51.989027   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:52.009062   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:52.009136   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:52.019486   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:52.019551   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:52.029779   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:52.029850   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:52.041148   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:52.041221   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:52.051832   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:52.051903   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:52.061655   13860 logs.go:276] 0 containers: []
	W0327 14:06:52.061665   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:52.061722   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:52.072271   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:52.072287   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:52.072294   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:52.084464   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:52.084474   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:52.101774   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:52.101785   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:52.115744   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:52.115757   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:52.126940   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:52.126951   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:52.161958   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:52.161969   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:52.201418   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:52.201432   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:52.213144   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:52.213157   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:52.224756   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:52.224766   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:52.239602   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:52.239614   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:52.264024   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:52.264031   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:52.268406   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:52.268414   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:52.286552   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:52.286563   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:54.802476   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:59.804656   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:59.804796   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:59.818322   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:59.818406   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:59.831505   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:59.831573   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:59.842472   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:59.842545   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:59.853152   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:59.853216   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:59.864344   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:59.864413   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:59.874430   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:59.874502   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:59.884843   13860 logs.go:276] 0 containers: []
	W0327 14:06:59.884853   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:59.884913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:59.895106   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:59.895123   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:59.895128   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:59.912496   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:59.912505   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:59.924381   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:59.924392   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:59.939098   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:59.939108   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:59.957232   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:59.957243   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:59.992010   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:59.992021   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:59.996479   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:59.996484   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:00.032178   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:00.032188   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:00.046836   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:00.046846   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:00.058207   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:00.058218   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:00.073003   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:00.073017   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:00.084338   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:00.084349   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:00.107955   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:00.107963   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:02.624741   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:07.625976   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:07.626162   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:07.643872   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:07.643962   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:07.658820   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:07.658897   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:07.671962   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:07:07.672032   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:07.683278   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:07.683348   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:07.693889   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:07.693957   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:07.704504   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:07.704575   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:07.714048   13860 logs.go:276] 0 containers: []
	W0327 14:07:07.714058   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:07.714115   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:07.725912   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:07.725928   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:07.725934   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:07.743055   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:07.743067   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:07.766364   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:07.766371   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:07.777369   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:07.777382   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:07.792095   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:07.792110   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:07.797033   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:07.797040   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:07.831258   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:07.831272   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:07.845097   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:07.845110   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:07.856973   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:07.856985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:07.868430   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:07.868442   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:07.882617   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:07.882628   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:07.895354   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:07.895364   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:07.928473   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:07.928482   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:10.455721   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:15.458102   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:15.458309   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:15.476681   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:15.476769   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:15.490604   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:15.490669   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:15.503597   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:15.503671   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:15.514277   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:15.514339   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:15.525178   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:15.525245   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:15.539548   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:15.539618   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:15.549657   13860 logs.go:276] 0 containers: []
	W0327 14:07:15.549675   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:15.549737   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:15.559804   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:15.559822   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:15.559828   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:15.564566   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:15.564571   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:15.578662   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:15.578672   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:15.590653   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:15.590663   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:15.602107   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:15.602118   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:15.619403   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:15.619413   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:15.633769   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:15.633779   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:15.645185   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:15.645195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:15.664032   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:15.664043   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:15.676476   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:15.676486   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:15.712579   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:15.712591   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:15.724635   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:15.724647   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:15.736740   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:15.736750   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:15.771429   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:15.771445   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:15.783219   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:15.783229   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
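The "container status" step above relies on a shell fallback chain: the backquoted `which crictl || echo crictl` substitutes crictl's path when it is installed and the bare word crictl when it is not, so the command degrades to the trailing "|| sudo docker ps -a" instead of aborting the pipeline. The same fallback rendered in Go for clarity; this is an illustrative sketch only, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when present on PATH; otherwise go straight to docker,
	// matching the `which crictl || echo crictl` ... "|| sudo docker ps -a"
	// fallback visible in the log above.
	cmd := "sudo crictl ps -a || sudo docker ps -a"
	if _, err := exec.LookPath("crictl"); err != nil {
		cmd = "sudo docker ps -a"
	}
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
}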
	I0327 14:07:18.308708   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:23.310984   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:23.311179   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:23.327457   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:23.327545   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:23.340052   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:23.340120   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:23.350909   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:23.350984   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:23.361607   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:23.361671   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:23.372666   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:23.372732   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:23.383242   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:23.383309   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:23.393492   13860 logs.go:276] 0 containers: []
	W0327 14:07:23.393503   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:23.393560   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:23.403588   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:23.403605   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:23.403611   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:23.415184   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:23.415195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:23.433118   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:23.433129   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:23.447812   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:23.447822   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:23.461725   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:23.461734   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:23.472809   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:23.472821   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:23.489988   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:23.490000   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:23.503601   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:23.503613   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:23.516552   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:23.516563   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:23.541047   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:23.541058   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:23.553378   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:23.553388   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:23.587152   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:23.587160   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:23.591331   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:23.591337   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:23.625673   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:23.625686   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:23.641044   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:23.641055   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:26.154931   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:31.157193   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:31.157357   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:31.172632   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:31.172715   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:31.184905   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:31.184973   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:31.195815   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:31.195896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:31.206167   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:31.206228   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:31.216646   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:31.216705   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:31.226836   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:31.226908   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:31.237022   13860 logs.go:276] 0 containers: []
	W0327 14:07:31.237034   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:31.237095   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:31.247977   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:31.247991   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:31.247997   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:31.271343   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:31.271353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:31.284212   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:31.284221   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:31.295564   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:31.295575   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:31.310567   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:31.310580   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:31.328743   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:31.328752   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:31.340062   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:31.340075   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:31.353708   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:31.353721   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:31.365106   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:31.365117   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:31.379223   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:31.379231   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:31.383544   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:31.383554   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:31.418049   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:31.418061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:31.430011   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:31.430022   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:31.441874   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:31.441885   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:31.453604   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:31.453613   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:33.989429   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:38.991657   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:38.991792   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:39.006056   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:39.006118   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:39.017827   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:39.017896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:39.028120   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:39.028187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:39.038798   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:39.038863   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:39.052003   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:39.052070   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:39.062852   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:39.062913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:39.073303   13860 logs.go:276] 0 containers: []
	W0327 14:07:39.073316   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:39.073370   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:39.084362   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:39.084380   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:39.084386   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:39.096058   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:39.096069   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:39.135305   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:39.135318   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:39.148102   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:39.148116   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:39.160307   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:39.160318   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:39.172252   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:39.172260   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:39.206225   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:39.206233   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:39.220743   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:39.220753   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:39.225702   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:39.225708   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:39.237654   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:39.237664   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:39.252340   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:39.252350   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:39.270151   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:39.270160   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:39.294777   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:39.294784   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:39.306084   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:39.306097   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:39.320162   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:39.320171   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:41.833322   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:46.833538   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:46.833700   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:46.849242   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:46.849326   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:46.861427   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:46.861490   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:46.879339   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:46.879421   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:46.891486   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:46.891557   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:46.901877   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:46.901944   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:46.916166   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:46.916231   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:46.926323   13860 logs.go:276] 0 containers: []
	W0327 14:07:46.926335   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:46.926398   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:46.940516   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:46.940536   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:46.940542   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:46.953910   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:46.953922   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:46.968248   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:46.968259   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:46.985379   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:46.985391   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:46.997165   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:46.997175   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:47.021038   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:47.021047   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:47.025827   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:47.025836   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:47.061473   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:47.061487   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:47.076572   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:47.076583   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:47.110833   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:47.110842   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:47.123015   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:47.123026   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:47.135540   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:47.135551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:47.147520   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:47.147531   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:47.167594   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:47.167604   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:47.186149   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:47.186162   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
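The block above is one full iteration of minikube's recovery loop: a /healthz probe times out, the runner enumerates each control-plane container by name filter, then tails the last 400 lines of every container (plus kubelet, Docker, and dmesg) before probing again. A condensed sketch of that pattern follows; probeHealthz and gatherLogs are hypothetical names standing in for minikube's internals, and the 5-second client timeout and 6-minute budget are read off the probe spacing and the final "wait 6m0s for node" message in this log.

    // Condensed sketch of the recovery loop reflected in the log above.
    // probeHealthz and gatherLogs are hypothetical names, not minikube's API.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "time"
    )

    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s probe-to-"stopped:" spacing above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func gatherLogs(component string) {
        // Same name filter the runner uses to find each control-plane container.
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        fmt.Printf("%s containers: %s", component, out)
        // The real loop then tails each container: docker logs --tail 400 <id>
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
        for time.Now().Before(deadline) {
            if probeHealthz("https://10.0.2.15:8443/healthz") == nil {
                return // healthy
            }
            for _, c := range []string{"kube-apiserver", "etcd", "coredns",
                "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
                gatherLogs(c)
            }
        }
        fmt.Println("apiserver healthz never reported healthy")
    }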
	I0327 14:07:49.703178   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:54.705462   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:54.705579   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:54.716973   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:54.717041   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:54.727981   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:54.728045   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:54.740842   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:54.740920   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:54.753171   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:54.753253   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:54.764650   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:54.764724   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:54.776802   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:54.776870   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:54.788145   13860 logs.go:276] 0 containers: []
	W0327 14:07:54.788156   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:54.788212   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:54.799327   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:54.799347   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:54.799353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:54.819522   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:54.819539   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:54.832146   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:54.832158   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:54.857196   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:54.857221   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:54.904738   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:54.904753   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:54.936121   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:54.936133   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:54.951035   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:54.951046   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:54.965997   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:54.966008   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:54.978213   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:54.978225   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:54.993650   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:54.993660   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:54.998456   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:54.998463   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:55.014137   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:55.014149   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:55.033568   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:55.033581   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:55.047568   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:55.047583   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:55.085119   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:55.085135   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
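The recurring "container status" step uses a shell fallback chain: `which crictl || echo crictl` substitutes crictl's full path when it is installed (and the bare name otherwise, so the command fails fast), and the outer `|| sudo docker ps -a` falls back to the Docker CLI when crictl is absent or errors. A minimal sketch running the same chain; in the real run it goes through the guest's SSH runner, so exec.Command is only a local stand-in.

    // Sketch: the same fallback chain, run through a local shell.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
    }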
	I0327 14:07:57.600270   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:02.602412   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:02.602569   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:02.613095   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:02.613163   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:02.627087   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:02.627158   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:02.637704   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:02.637776   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:02.655534   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:02.655608   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:02.666366   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:02.666434   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:02.677390   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:02.677457   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:02.688766   13860 logs.go:276] 0 containers: []
	W0327 14:08:02.688778   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:02.688846   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:02.705266   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:02.705285   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:02.705290   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:02.717020   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:02.717031   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:02.729372   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:02.729382   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:02.750019   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:02.750033   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:02.764258   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:02.764269   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:02.779851   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:02.779862   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:02.814289   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:02.814300   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:02.849054   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:02.849064   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:02.863288   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:02.863298   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:02.875338   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:02.875349   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:02.887055   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:02.887065   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:02.898800   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:02.898812   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:02.921757   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:02.921765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:02.926320   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:02.926330   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:02.938785   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:02.938797   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
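Note that the "Gathering logs for ..." order differs in every cycle (coredns first in one pass, dmesg first in the next). That reshuffling is consistent with the log sources being kept in a Go map, whose iteration order is deliberately randomized; the demonstration below is an inference from the output, not a claim about logs.go internals.

    // Demonstration: Go map iteration order is randomized, which would produce
    // exactly this kind of reshuffling between otherwise identical passes.
    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kubelet":   "journalctl -u kubelet -n 400",
            "dmesg":     "dmesg | tail -n 400",
            "etcd":      "docker logs --tail 400 <id>",
            "coredns":   "docker logs --tail 400 <id>",
            "container": "crictl ps -a || docker ps -a",
        }
        for pass := 0; pass < 3; pass++ {
            for name := range sources { // unspecified order; varies between ranges
                fmt.Print(name, " ")
            }
            fmt.Println()
        }
    }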
	I0327 14:08:05.461441   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:10.463716   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:10.463923   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:10.486627   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:10.486748   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:10.501847   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:10.501929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:10.514929   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:10.514997   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:10.526023   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:10.526089   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:10.536751   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:10.536826   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:10.548106   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:10.548174   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:10.558197   13860 logs.go:276] 0 containers: []
	W0327 14:08:10.558210   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:10.558265   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:10.568406   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:10.568421   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:10.568427   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:10.573195   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:10.573204   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:10.588246   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:10.588255   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:10.600083   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:10.600097   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:10.611336   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:10.611346   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:10.623178   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:10.623189   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:10.634944   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:10.634956   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:10.670116   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:10.670127   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:10.684148   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:10.684159   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:10.701672   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:10.701683   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:10.739430   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:10.739440   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:10.754565   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:10.754575   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:10.769305   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:10.769315   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:10.781061   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:10.781072   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:10.797551   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:10.797563   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:13.329216   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:18.331366   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:18.331460   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:18.343332   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:18.343432   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:18.357533   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:18.357649   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:18.369724   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:18.369812   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:18.381119   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:18.381193   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:18.392511   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:18.392583   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:18.403826   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:18.403899   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:18.414786   13860 logs.go:276] 0 containers: []
	W0327 14:08:18.414796   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:18.414855   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:18.426308   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:18.426330   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:18.426335   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:18.430743   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:18.430749   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:18.444945   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:18.444958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:18.458586   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:18.458595   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:18.473289   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:18.473301   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:18.485224   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:18.485234   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:18.524972   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:18.524985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:18.537156   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:18.537168   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:18.552179   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:18.552191   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:18.576669   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:18.576676   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:18.588754   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:18.588766   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:18.624132   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:18.624139   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:18.635390   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:18.635399   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:18.646546   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:18.646558   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:18.658670   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:18.658683   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:21.177829   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:26.180042   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:26.180211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:26.195292   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:26.195362   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:26.205946   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:26.206020   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:26.217228   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:26.217294   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:26.228674   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:26.228756   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:26.243628   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:26.243707   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:26.254321   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:26.254383   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:26.264980   13860 logs.go:276] 0 containers: []
	W0327 14:08:26.264991   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:26.265062   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:26.275537   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:26.275556   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:26.275562   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:26.286694   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:26.286704   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:26.301495   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:26.301506   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:26.336363   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:26.336375   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:26.371555   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:26.371568   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:26.383384   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:26.383396   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:26.394670   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:26.394680   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:26.398922   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:26.398929   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:26.411005   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:26.411018   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:26.425592   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:26.425604   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:26.438001   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:26.438011   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:26.449881   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:26.449890   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:26.471544   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:26.471554   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:26.483764   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:26.483774   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:26.506614   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:26.506623   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:29.022723   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:34.024983   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:34.025211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:34.046460   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:34.046559   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:34.061802   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:34.061867   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:34.073835   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:34.073907   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:34.084682   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:34.084740   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:34.095125   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:34.095194   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:34.105703   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:34.105766   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:34.118506   13860 logs.go:276] 0 containers: []
	W0327 14:08:34.118517   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:34.118570   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:34.129233   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:34.129252   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:34.129258   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:34.134209   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:34.134217   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:34.146407   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:34.146418   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:34.174773   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:34.174791   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:34.202135   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:34.202150   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:34.214053   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:34.214063   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:34.228439   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:34.228452   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:34.239660   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:34.239670   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:34.255492   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:34.255505   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:34.267457   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:34.267468   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:34.282671   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:34.282683   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:34.316972   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:34.316981   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:34.331438   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:34.331449   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:34.343172   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:34.343181   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:34.377608   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:34.377619   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:36.897682   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:41.899876   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:41.899988   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:41.912818   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:41.912896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:41.923854   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:41.923923   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:41.935912   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:41.935985   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:41.945690   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:41.945761   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:41.956044   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:41.956112   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:41.966718   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:41.966782   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:41.976215   13860 logs.go:276] 0 containers: []
	W0327 14:08:41.976225   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:41.976285   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:41.986426   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:41.986441   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:41.986447   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:42.001216   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:42.001226   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:42.013049   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:42.013060   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:42.027857   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:42.027871   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:42.038957   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:42.038970   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:42.062965   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:42.062976   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:42.097231   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:42.097241   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:42.109378   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:42.109387   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:42.122786   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:42.122798   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:42.159273   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:42.159287   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:42.176910   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:42.176922   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:42.188861   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:42.188869   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:42.201680   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:42.201691   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:42.215503   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:42.215525   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:42.226914   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:42.226927   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:44.733355   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:49.735713   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:49.735947   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:49.755379   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:49.755472   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:49.769987   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:49.770060   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:49.782282   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:49.782359   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:49.792628   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:49.792698   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:49.803389   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:49.803461   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:49.814133   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:49.814201   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:49.824651   13860 logs.go:276] 0 containers: []
	W0327 14:08:49.824663   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:49.824725   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:49.835472   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:49.835488   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:49.835493   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:49.869774   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:49.869798   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:49.884050   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:49.884067   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:49.897229   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:49.897243   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:49.919684   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:49.919692   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:49.958529   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:49.958540   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:49.970632   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:49.970645   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:49.983710   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:49.983722   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:49.987969   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:49.987975   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:50.002187   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:50.002199   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:50.016748   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:50.016758   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:50.037386   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:50.037395   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:50.051176   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:50.051187   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:50.063451   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:50.063462   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:50.075648   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:50.075658   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:52.589484   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:57.591677   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:57.596114   13860 out.go:177] 
	W0327 14:08:57.599070   13860 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 14:08:57.599086   13860 out.go:239] * 
	W0327 14:08:57.600244   13860 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:08:57.616070   13860 out.go:177] 

                                                
                                                
** /stderr **
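Every probe in the stderr stream above follows the same rhythm: the GET to https://10.0.2.15:8443/healthz is issued, and almost exactly five seconds later it is abandoned with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", i.e. no response headers ever arrived within the client timeout. That exact error string can be reproduced with a plain net/http client against a socket that accepts connections but never answers, which is presumably how the half-started apiserver appeared to minikube here.

    // Reproduction sketch: a 5s net/http client timeout against a listener that
    // accepts connections but never responds yields the same error string as
    // every probe in this run.
    package main

    import (
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go func() {
            for {
                conn, err := ln.Accept()
                if err != nil {
                    return
                }
                _ = conn // hold the connection open; never write a response
            }
        }()

        client := &http.Client{Timeout: 5 * time.Second}
        _, err = client.Get("http://" + ln.Addr().String() + "/healthz")
        fmt.Println(err)
        // Get "http://127.0.0.1:<port>/healthz": context deadline exceeded
        // (Client.Timeout exceeded while awaiting headers)
    }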
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-823000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
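The failing flow is also visible in the Audit table below: the released v1.26.0 binary boots the running-upgrade-823000 profile, then the freshly built HEAD binary (v1.33.0-beta.0 here) runs start against the same live cluster, and it is that second start which exits with status 80. A condensed sketch of the sequence; binary paths, flags, and the profile name are taken from this report, while the start helper is purely illustrative.

    // Condensed sketch of the upgrade flow this test exercises.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func start(binary, profile string, extra ...string) error {
        args := append([]string{"start", "-p", profile, "--memory=2200"}, extra...)
        out, err := exec.Command(binary, args...).CombinedOutput()
        fmt.Println(string(out))
        return err
    }

    func main() {
        profile := "running-upgrade-823000"
        // 1. Boot the cluster with the old released binary.
        if err := start("minikube-v1.26.0", profile, "--vm-driver=qemu2"); err != nil {
            panic(err)
        }
        // 2. Upgrade in place: run the HEAD binary against the running profile.
        // In this report the second start exits with status 80 (GUEST_START).
        if err := start("out/minikube-darwin-arm64", profile,
            "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
            fmt.Println("upgrade failed:", err)
        }
    }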
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-27 14:08:57.707574 -0700 PDT m=+1413.225015793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-823000 -n running-upgrade-823000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-823000 -n running-upgrade-823000: exit status 2 (15.730690125s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-823000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-965000          | force-systemd-flag-965000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-694000              | force-systemd-env-694000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-694000           | force-systemd-env-694000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT | 27 Mar 24 13:58 PDT |
	| start   | -p docker-flags-637000                | docker-flags-637000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-965000             | force-systemd-flag-965000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-965000          | force-systemd-flag-965000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT | 27 Mar 24 13:58 PDT |
	| start   | -p cert-expiration-514000             | cert-expiration-514000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-637000 ssh               | docker-flags-637000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-637000 ssh               | docker-flags-637000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-637000                | docker-flags-637000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT | 27 Mar 24 13:58 PDT |
	| start   | -p cert-options-468000                | cert-options-468000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-468000 ssh               | cert-options-468000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-468000 -- sudo        | cert-options-468000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-468000                | cert-options-468000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:58 PDT | 27 Mar 24 13:58 PDT |
	| start   | -p running-upgrade-823000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 13:59 PDT | 27 Mar 24 14:00 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-823000             | running-upgrade-823000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:00 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-514000             | cert-expiration-514000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:01 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-514000             | cert-expiration-514000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:01 PDT | 27 Mar 24 14:01 PDT |
	| start   | -p kubernetes-upgrade-524000          | kubernetes-upgrade-524000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:01 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-524000          | kubernetes-upgrade-524000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:02 PDT | 27 Mar 24 14:02 PDT |
	| start   | -p kubernetes-upgrade-524000          | kubernetes-upgrade-524000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:02 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0   |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-524000          | kubernetes-upgrade-524000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:02 PDT | 27 Mar 24 14:02 PDT |
	| start   | -p stopped-upgrade-077000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 14:02 PDT | 27 Mar 24 14:03 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-077000 stop           | minikube                  | jenkins | v1.26.0        | 27 Mar 24 14:03 PDT | 27 Mar 24 14:03 PDT |
	| start   | -p stopped-upgrade-077000             | stopped-upgrade-077000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 14:03 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 14:03:18
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
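
Each entry that follows uses the klog header documented above: a severity letter (I/W/E/F), month and day, wall-clock time with microseconds, the logging process ID, the emitting source file and line, then the message. A minimal Go sketch of splitting such a line on this assumed field layout (not minikube's own parser):

package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the header format noted above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I0327 14:03:18.700900   14042 out.go:291] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=pid, m[5]=file:line, m[6]=message
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Note that two processes (pids 14042 and 13860) interleave in this transcript; the pid field is what lets the two runs be separated.
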
	I0327 14:03:18.700900   14042 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:03:18.701048   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:03:18.701052   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:03:18.701054   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:03:18.701222   14042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:03:18.702380   14042 out.go:298] Setting JSON to false
	I0327 14:03:18.721182   14042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7368,"bootTime":1711566030,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:03:18.721253   14042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:03:18.725068   14042 out.go:177] * [stopped-upgrade-077000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:03:18.733894   14042 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:03:18.735480   14042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:03:18.733929   14042 notify.go:220] Checking for updates...
	I0327 14:03:18.738858   14042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:03:18.741931   14042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:03:18.744908   14042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:03:18.747857   14042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:03:18.751207   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:03:18.754876   14042 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 14:03:18.757828   14042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:03:18.761837   14042 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:03:18.768845   14042 start.go:297] selected driver: qemu2
	I0327 14:03:18.768853   14042 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:18.768899   14042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:03:18.771501   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:03:18.771519   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:03:18.771548   14042 start.go:340] cluster config:
	{Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:18.771609   14042 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:03:18.778704   14042 out.go:177] * Starting "stopped-upgrade-077000" primary control-plane node in "stopped-upgrade-077000" cluster
	I0327 14:03:18.782839   14042 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:03:18.782853   14042 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 14:03:18.782859   14042 cache.go:56] Caching tarball of preloaded images
	I0327 14:03:18.782905   14042 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:03:18.782911   14042 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 14:03:18.782954   14042 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/config.json ...
	I0327 14:03:18.783339   14042 start.go:360] acquireMachinesLock for stopped-upgrade-077000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:03:18.783368   14042 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "stopped-upgrade-077000"
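
The Delay/Timeout pair in the lock spec above describes a poll-until-deadline acquisition. A minimal Go sketch of that pattern, with a hypothetical tryLock callback standing in for minikube's real file-based machines lock:

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// acquireLock polls tryLock every delay until timeout elapses, matching the
// Delay:500ms / Timeout:13m0s parameters in the acquireMachinesLock line
// above. tryLock is a hypothetical non-blocking attempt; the real lock
// implementation is more involved.
func acquireLock(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return fmt.Errorf("could not acquire lock within %s", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	var free atomic.Bool
	go func() { time.Sleep(time.Second); free.Store(true) }() // lock frees up after 1s
	fmt.Println(acquireLock(free.Load, 500*time.Millisecond, 13*time.Minute))
}
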
	I0327 14:03:18.783379   14042 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:03:18.783383   14042 fix.go:54] fixHost starting: 
	I0327 14:03:18.783497   14042 fix.go:112] recreateIfNeeded on stopped-upgrade-077000: state=Stopped err=<nil>
	W0327 14:03:18.783506   14042 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:03:18.790845   14042 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-077000" ...
	I0327 14:03:21.856029   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:21.856633   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:21.894010   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:21.894139   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:21.914758   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:21.914889   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:21.929626   13860 logs.go:276] 1 container: [291d634f0425]

	I0327 14:03:21.929698   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:21.941758   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:21.941838   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:21.953254   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:03:21.953322   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:21.963935   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:21.964035   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:21.974726   13860 logs.go:276] 0 containers: []
	W0327 14:03:21.974736   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:21.974797   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:21.984995   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:21.985018   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:21.985024   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:21.996707   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:21.996717   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:22.008474   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:22.008487   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:22.019871   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:22.019883   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:22.055851   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:22.055862   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:22.092813   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:22.092825   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:22.104434   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:22.104443   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:22.118257   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:22.118269   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:22.157570   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:22.157577   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:22.170956   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:22.170966   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:22.188603   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:22.188613   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:22.202282   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:22.202293   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:22.227053   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:22.227068   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:22.239513   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:22.239523   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:22.244099   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:22.244107   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:22.258942   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:22.258954   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:22.274433   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:22.274445   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:18.794936   14042 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52464-:22,hostfwd=tcp::52465-:2376,hostname=stopped-upgrade-077000 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/disk.qcow2
	I0327 14:03:18.843091   14042 main.go:141] libmachine: STDOUT: 
	I0327 14:03:18.843129   14042 main.go:141] libmachine: STDERR: 
	I0327 14:03:18.843134   14042 main.go:141] libmachine: Waiting for VM to start (ssh -p 52464 docker@127.0.0.1)...
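
The wait step above amounts to retrying a connection to the forwarded SSH port until the guest answers. A simplified Go sketch that only checks TCP reachability (minikube additionally completes a full SSH handshake as the docker user):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the forwarded guest SSH port (52464 in the hostfwd rule
// above) until it accepts a TCP connection. The 30s budget here is an
// assumption for illustration.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("SSH port %s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:52464", 30*time.Second))
}
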
	I0327 14:03:24.795008   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:29.797205   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:29.797320   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:29.812153   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:29.812230   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:29.823757   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:29.823823   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:29.835626   13860 logs.go:276] 1 container: [291d634f0425]
	I0327 14:03:29.835701   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:29.846637   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:29.846707   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:29.857218   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:03:29.857279   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:29.867786   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:29.867851   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:29.877405   13860 logs.go:276] 0 containers: []
	W0327 14:03:29.877415   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:29.877472   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:29.888373   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:29.888390   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:29.888395   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:29.927030   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:29.927041   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:29.967409   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:29.967432   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:29.981270   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:29.981286   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:30.000588   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:30.000605   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:30.017454   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:30.017471   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:30.030690   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:30.030705   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:30.075054   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:30.075076   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:30.091199   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:30.091216   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:30.104348   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:30.104361   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:30.119387   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:30.119400   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:30.147364   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:30.147383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:30.163174   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:30.163189   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:30.168451   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:30.168463   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:30.184202   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:30.184215   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:30.198132   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:30.198143   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:30.216936   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:30.216958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
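
Each cycle above is the same probe: GET the apiserver's /healthz with a short client timeout, and on failure collect component logs before retrying. A hedged Go sketch of the probe itself (certificate verification is skipped here for brevity; the real check trusts the cluster CA rather than skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver /healthz endpoint with a
// short client timeout, the probe the log repeats above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded" as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}
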
	I0327 14:03:32.733962   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:37.736500   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:37.736929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:37.770401   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:37.770526   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:37.789917   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:37.790006   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:37.804220   13860 logs.go:276] 1 container: [291d634f0425]
	I0327 14:03:37.804298   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:37.816450   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:37.816518   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:37.827310   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:03:37.827379   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:37.838254   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:37.838328   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:37.848161   13860 logs.go:276] 0 containers: []
	W0327 14:03:37.848173   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:37.848226   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:37.861328   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:37.861348   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:37.861353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:37.873502   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:37.873510   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:37.889638   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:37.889651   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:37.901571   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:37.901585   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:37.942703   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:37.942710   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:37.958920   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:37.958931   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:37.975621   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:37.975634   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:37.989878   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:37.989890   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:37.994121   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:37.994129   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:38.007487   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:38.007499   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:38.021860   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:38.021873   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:38.033864   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:38.033874   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:38.073692   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:38.073863   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:38.113755   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:38.113766   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:38.125284   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:38.125294   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:38.147767   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:38.147774   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:38.159142   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:38.159152   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:40.680974   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:39.251049   14042 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/config.json ...
	I0327 14:03:39.251684   14042 machine.go:94] provisionDockerMachine start ...
	I0327 14:03:39.251849   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.252254   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.252266   14042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 14:03:39.342064   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 14:03:39.342089   14042 buildroot.go:166] provisioning hostname "stopped-upgrade-077000"
	I0327 14:03:39.342156   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.342359   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.342368   14042 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-077000 && echo "stopped-upgrade-077000" | sudo tee /etc/hostname
	I0327 14:03:39.424644   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-077000
	
	I0327 14:03:39.424706   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.424844   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.424856   14042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-077000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-077000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-077000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 14:03:39.501918   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 14:03:39.501933   14042 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18158-11341/.minikube CaCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18158-11341/.minikube}
	I0327 14:03:39.501944   14042 buildroot.go:174] setting up certificates
	I0327 14:03:39.501955   14042 provision.go:84] configureAuth start
	I0327 14:03:39.501965   14042 provision.go:143] copyHostCerts
	I0327 14:03:39.502084   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem, removing ...
	I0327 14:03:39.502093   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem
	I0327 14:03:39.502255   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem (1123 bytes)
	I0327 14:03:39.502543   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem, removing ...
	I0327 14:03:39.502548   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem
	I0327 14:03:39.502645   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem (1675 bytes)
	I0327 14:03:39.502846   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem, removing ...
	I0327 14:03:39.502851   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem
	I0327 14:03:39.502933   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem (1078 bytes)
	I0327 14:03:39.503061   14042 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-077000 san=[127.0.0.1 localhost minikube stopped-upgrade-077000]
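
The server cert above is issued from the minikube CA with the listed SANs (127.0.0.1, localhost, minikube, and the profile name). A compressed Go sketch of that flow using crypto/x509 (errors elided for brevity; this is not minikube's provisioning code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by the given CA with the
// SANs seen in the log line above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-077000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-077000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Self-sign a throwaway CA so the sketch is runnable end to end.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	pemBytes, err := newServerCert(caCert, caKey)
	fmt.Println(len(pemBytes), err)
}
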
	I0327 14:03:39.606406   14042 provision.go:177] copyRemoteCerts
	I0327 14:03:39.606453   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 14:03:39.606462   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:39.644033   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 14:03:39.651207   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 14:03:39.658128   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 14:03:39.664557   14042 provision.go:87] duration metric: took 162.596125ms to configureAuth
	I0327 14:03:39.664566   14042 buildroot.go:189] setting minikube options for container-runtime
	I0327 14:03:39.664658   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:03:39.664694   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.664781   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.664786   14042 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 14:03:39.732730   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 14:03:39.732738   14042 buildroot.go:70] root file system type: tmpfs
	I0327 14:03:39.732798   14042 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 14:03:39.732845   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.732949   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.732984   14042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 14:03:39.808429   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 14:03:39.808482   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.808600   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.808611   14042 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 14:03:40.171752   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 14:03:40.171766   14042 machine.go:97] duration metric: took 920.086083ms to provisionDockerMachine
	I0327 14:03:40.171772   14042 start.go:293] postStartSetup for "stopped-upgrade-077000" (driver="qemu2")
	I0327 14:03:40.171779   14042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 14:03:40.171849   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 14:03:40.171859   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:40.208203   14042 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 14:03:40.209486   14042 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 14:03:40.209492   14042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/addons for local assets ...
	I0327 14:03:40.209571   14042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/files for local assets ...
	I0327 14:03:40.209686   14042 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem -> 117522.pem in /etc/ssl/certs
	I0327 14:03:40.209816   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 14:03:40.212356   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:03:40.219427   14042 start.go:296] duration metric: took 47.650459ms for postStartSetup
	I0327 14:03:40.219441   14042 fix.go:56] duration metric: took 21.436376125s for fixHost
	I0327 14:03:40.219474   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:40.219577   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:40.219582   14042 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 14:03:40.286254   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711573420.354892254
	
	I0327 14:03:40.286261   14042 fix.go:216] guest clock: 1711573420.354892254
	I0327 14:03:40.286265   14042 fix.go:229] Guest: 2024-03-27 14:03:40.354892254 -0700 PDT Remote: 2024-03-27 14:03:40.219443 -0700 PDT m=+21.552879251 (delta=135.449254ms)
	I0327 14:03:40.286277   14042 fix.go:200] guest clock delta is within tolerance: 135.449254ms
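
The tolerance check above reads the guest clock over SSH (the `date +%s.%N` command) and compares it with the host's. A small Go sketch of the comparison; the 2s tolerance used here is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares the guest clock reading with the host clock and
// accepts it when the absolute skew is under the tolerance. The 135ms delta
// in the log above passes.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1711573420, 354892254) // guest clock from the log
	host := guest.Add(-135449254 * time.Nanosecond)
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Println(d, ok)
}
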
	I0327 14:03:40.286280   14042 start.go:83] releasing machines lock for "stopped-upgrade-077000", held for 21.503226083s
	I0327 14:03:40.286345   14042 ssh_runner.go:195] Run: cat /version.json
	I0327 14:03:40.286346   14042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 14:03:40.286354   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:40.286365   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	W0327 14:03:40.286950   14042 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52464: connect: connection refused
	I0327 14:03:40.286981   14042 retry.go:31] will retry after 348.367625ms: dial tcp [::1]:52464: connect: connection refused
	W0327 14:03:40.693144   14042 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 14:03:40.693268   14042 ssh_runner.go:195] Run: systemctl --version
	I0327 14:03:40.696781   14042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 14:03:40.700120   14042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 14:03:40.700170   14042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 14:03:40.705493   14042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 14:03:40.712620   14042 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 14:03:40.712630   14042 start.go:494] detecting cgroup driver to use...
	I0327 14:03:40.712721   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:03:40.722690   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 14:03:40.727062   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 14:03:40.730942   14042 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 14:03:40.730974   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 14:03:40.734588   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:03:40.737789   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 14:03:40.740808   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:03:40.744220   14042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 14:03:40.747809   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 14:03:40.751337   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 14:03:40.754522   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 14:03:40.757363   14042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 14:03:40.760288   14042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 14:03:40.763565   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:40.838581   14042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 14:03:40.844370   14042 start.go:494] detecting cgroup driver to use...
	I0327 14:03:40.844435   14042 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 14:03:40.849985   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:03:40.854563   14042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 14:03:40.863713   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:03:40.868725   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 14:03:40.873623   14042 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 14:03:40.931859   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 14:03:40.938691   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:03:40.945525   14042 ssh_runner.go:195] Run: which cri-dockerd
	I0327 14:03:40.946848   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 14:03:40.949448   14042 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 14:03:40.954367   14042 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 14:03:41.033187   14042 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 14:03:41.119862   14042 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 14:03:41.119930   14042 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
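
The 130-byte daemon.json written above pins Docker's cgroup driver to cgroupfs. A sketch of producing such a file; the exact fields are an assumption for illustration, not the file minikube generated:

package main

import (
	"encoding/json"
	"fmt"
)

// Emit a minimal daemon.json selecting the cgroupfs driver, the setting the
// "configuring docker to use cgroupfs" line above refers to.
func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
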
	I0327 14:03:41.125337   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:41.202511   14042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:03:42.365150   14042 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162638125s)
	I0327 14:03:42.365206   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 14:03:42.369664   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:03:42.373794   14042 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 14:03:42.448714   14042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 14:03:42.533753   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:42.613889   14042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 14:03:42.619853   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:03:42.624676   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:42.706939   14042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 14:03:42.745123   14042 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 14:03:42.745200   14042 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 14:03:42.747373   14042 start.go:562] Will wait 60s for crictl version
	I0327 14:03:42.747419   14042 ssh_runner.go:195] Run: which crictl
	I0327 14:03:42.748838   14042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 14:03:42.764368   14042 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 14:03:42.764440   14042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:03:42.783608   14042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:03:42.804056   14042 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 14:03:42.804119   14042 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 14:03:42.805501   14042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 14:03:42.809007   14042 kubeadm.go:877] updating cluster {Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 14:03:42.809057   14042 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:03:42.809097   14042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:03:42.819682   14042 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:03:42.819691   14042 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
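
The cached tarball still carries the v1.24.1 images under the legacy k8s.gcr.io prefix, so the lookup for the registry.k8s.io name fails and minikube falls back to copying the preload tarball over. A trivial sketch of that membership check:

package main

import "fmt"

// hasImage reports whether want appears in the `docker images` listing, the
// check behind the "wasn't preloaded" line above.
func hasImage(images []string, want string) bool {
	for _, img := range images {
		if img == want {
			return true
		}
	}
	return false
}

func main() {
	preloaded := []string{
		"k8s.gcr.io/kube-apiserver:v1.24.1", // legacy prefix from the stdout block above
		"k8s.gcr.io/kube-proxy:v1.24.1",
	}
	fmt.Println(hasImage(preloaded, "registry.k8s.io/kube-apiserver:v1.24.1")) // false
}
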
	I0327 14:03:42.819738   14042 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:03:42.823486   14042 ssh_runner.go:195] Run: which lz4
	I0327 14:03:42.824732   14042 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 14:03:42.826108   14042 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 14:03:42.826120   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 14:03:43.528704   14042 docker.go:649] duration metric: took 704.008334ms to copy over tarball
	I0327 14:03:43.528766   14042 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 14:03:45.683288   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:45.683423   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:45.695607   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:45.695680   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:45.706414   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:45.706489   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:45.717110   13860 logs.go:276] 1 container: [291d634f0425]
	I0327 14:03:45.717187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:45.727986   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:45.728056   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:45.745343   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:03:45.745410   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:45.756055   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:45.756127   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:45.766075   13860 logs.go:276] 0 containers: []
	W0327 14:03:45.766086   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:45.766146   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:45.776718   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:45.776738   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:45.776743   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:45.812162   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:45.812176   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:45.832282   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:45.832295   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:45.845190   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:45.845201   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:45.864240   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:45.864254   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:45.890016   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:45.890031   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:45.904327   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:45.904340   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:45.921996   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:45.922010   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:45.933759   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:45.933771   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:45.950339   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:45.950352   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:45.962263   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:45.962276   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:45.973707   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:45.973718   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:46.015889   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:46.015903   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:46.030520   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:46.030532   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:46.042150   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:46.042162   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:46.046303   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:46.046309   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:46.085357   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:46.085367   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:44.712705   14042 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.183932167s)
	I0327 14:03:44.712718   14042 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 14:03:44.728725   14042 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:03:44.732031   14042 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 14:03:44.737320   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:44.815490   14042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:03:46.919772   14042 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.104295167s)
	I0327 14:03:46.919864   14042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:03:46.933273   14042 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:03:46.933283   14042 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 14:03:46.933288   14042 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 14:03:46.944102   14042 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:46.944103   14042 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:46.944162   14042 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:46.944215   14042 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:46.944360   14042 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:46.944384   14042 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:46.944414   14042 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:46.944422   14042 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 14:03:46.952444   14042 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:46.952515   14042 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 14:03:46.952584   14042 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:46.952594   14042 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:46.952599   14042 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:46.952657   14042 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:46.952696   14042 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:46.952770   14042 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:48.599424   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:49.418847   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.420059   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.422498   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.423475   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0327 14:03:49.425267   14042 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 14:03:49.425426   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.425523   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 14:03:49.433602   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
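
The arch-mismatch warning at 14:03:49.425 is minikube noticing that the locally cached coredns image is amd64 while the qemu2 guest is arm64, so the image has to be fetched again for the right architecture. A rough sketch of such a check, shelling out to the docker CLI as the surrounding steps do (the real logic lives in minikube's image package and may differ):

package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	// docker image inspect exposes the image's platform architecture.
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", "registry.k8s.io/coredns/coredns:v1.8.6").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != runtime.GOARCH {
		fmt.Printf("arch mismatch: want %s got %s. fixing\n", runtime.GOARCH, got)
	}
}
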
	I0327 14:03:49.461293   14042 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 14:03:49.461326   14042 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.461384   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.463129   14042 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 14:03:49.463143   14042 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.463176   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.471939   14042 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 14:03:49.471958   14042 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.472041   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.481404   14042 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 14:03:49.481425   14042 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:49.481495   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:49.494983   14042 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 14:03:49.495002   14042 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.495054   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.500334   14042 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 14:03:49.500353   14042 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 14:03:49.500398   14042 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 14:03:49.500412   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 14:03:49.500412   14042 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:49.500436   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:49.513449   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 14:03:49.513487   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 14:03:49.513494   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 14:03:49.520374   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 14:03:49.523099   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 14:03:49.523202   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:03:49.528977   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 14:03:49.529002   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 14:03:49.529014   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 14:03:49.529052   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 14:03:49.529077   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 14:03:49.530848   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 14:03:49.530860   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 14:03:49.556571   14042 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 14:03:49.556587   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 14:03:49.596033   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0327 14:03:49.596071   14042 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:03:49.596082   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 14:03:49.631077   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
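
Each "Loading image" step above is a plain shell pipeline, sudo cat <tarball> | docker load. An equivalent Go sketch (hypothetical helper, sudo omitted) that streams the cached tarball into docker load over stdin:

package main

import (
	"os"
	"os/exec"
)

// dockerLoad pipes an image tarball into `docker load`, mirroring
// `cat /var/lib/minikube/images/pause_3.7 | docker load`.
func dockerLoad(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load") // reads the tarball from stdin
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	_ = dockerLoad("/var/lib/minikube/images/pause_3.7")
}
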
	W0327 14:03:49.940619   14042 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 14:03:49.941157   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:49.988005   14042 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 14:03:49.988045   14042 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:49.988145   14042 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:50.012201   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 14:03:50.012337   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:03:50.014259   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 14:03:50.014275   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 14:03:50.042416   14042 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:03:50.042430   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 14:03:50.278697   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 14:03:50.278734   14042 cache_images.go:92] duration metric: took 3.345487625s to LoadCachedImages
	W0327 14:03:50.278774   14042 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0327 14:03:50.278782   14042 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 14:03:50.278832   14042 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-077000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 14:03:50.278902   14042 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 14:03:50.292826   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:03:50.292837   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:03:50.292841   14042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 14:03:50.292850   14042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-077000 NodeName:stopped-upgrade-077000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 14:03:50.292915   14042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-077000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
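
minikube renders the kubeadm config above from a Go template filled in with the kubeadm options struct logged at 14:03:50.292850. A toy reduction of that idea (the template text and struct fields here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A fragment of an InitConfiguration, parameterized the way the options
// struct in the log suggests (AdvertiseAddress, APIServerPort, CRISocket, NodeName).
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{"10.0.2.15", 8443, "/var/run/cri-dockerd.sock", "stopped-upgrade-077000"})
}
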
	I0327 14:03:50.292964   14042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 14:03:50.296219   14042 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 14:03:50.296248   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 14:03:50.299324   14042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 14:03:50.304227   14042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 14:03:50.309353   14042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 14:03:50.314234   14042 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 14:03:50.315452   14042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
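
The one-liner above makes the /etc/hosts pin idempotent: filter out any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back with sudo. The same logic in Go (a sketch; the sudo copy and error reporting are elided):

package main

import (
	"os"
	"strings"
)

// pinHost rewrites hostsPath so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log line above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = pinHost("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
}
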
	I0327 14:03:50.319498   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:50.394731   14042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:03:50.400151   14042 certs.go:68] Setting up /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000 for IP: 10.0.2.15
	I0327 14:03:50.400158   14042 certs.go:194] generating shared ca certs ...
	I0327 14:03:50.400166   14042 certs.go:226] acquiring lock for ca certs: {Name:mkbfc84e619c8d37a470429cb64ebb1efb05c6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.400326   14042 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key
	I0327 14:03:50.401071   14042 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key
	I0327 14:03:50.401077   14042 certs.go:256] generating profile certs ...
	I0327 14:03:50.401364   14042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key
	I0327 14:03:50.401390   14042 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0
	I0327 14:03:50.401402   14042 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 14:03:50.619002   14042 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 ...
	I0327 14:03:50.619018   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0: {Name:mk46c4c69cec8e14adc115c5f9a746ac9de77e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.619331   14042 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0 ...
	I0327 14:03:50.619336   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0: {Name:mka1aecbeaeb70a08aae7fc5ff07a1d2988378fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.619493   14042 certs.go:381] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt
	I0327 14:03:50.619632   14042 certs.go:385] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key
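
The "generating signed profile cert" step above issues an apiserver serving certificate signed by the shared minikubeCA, embedding the four IP SANs listed at 14:03:50.401402. A self-contained stdlib sketch of that signing flow (key sizes, lifetimes, and subjects are placeholders, not minikube's values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// The apiserver serving cert with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
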
	I0327 14:03:50.621315   14042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.key
	I0327 14:03:50.621490   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem (1338 bytes)
	W0327 14:03:50.621685   14042 certs.go:480] ignoring /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752_empty.pem, impossibly tiny 0 bytes
	I0327 14:03:50.621693   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 14:03:50.621718   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem (1078 bytes)
	I0327 14:03:50.621737   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem (1123 bytes)
	I0327 14:03:50.621755   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem (1675 bytes)
	I0327 14:03:50.621797   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:03:50.622112   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 14:03:50.629498   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 14:03:50.636308   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 14:03:50.643642   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 14:03:50.651123   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 14:03:50.657752   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 14:03:50.664082   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 14:03:50.670499   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 14:03:50.677238   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /usr/share/ca-certificates/117522.pem (1708 bytes)
	I0327 14:03:50.683484   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 14:03:50.690271   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem --> /usr/share/ca-certificates/11752.pem (1338 bytes)
	I0327 14:03:50.697033   14042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 14:03:50.702282   14042 ssh_runner.go:195] Run: openssl version
	I0327 14:03:50.704104   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117522.pem && ln -fs /usr/share/ca-certificates/117522.pem /etc/ssl/certs/117522.pem"
	I0327 14:03:50.707141   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.708694   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 20:47 /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.708710   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.710486   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117522.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 14:03:50.713350   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 14:03:50.716334   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.717868   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 21:00 /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.717886   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.719633   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 14:03:50.722696   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11752.pem && ln -fs /usr/share/ca-certificates/11752.pem /etc/ssl/certs/11752.pem"
	I0327 14:03:50.725519   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.726826   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 20:47 /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.726841   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.728574   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11752.pem /etc/ssl/certs/51391683.0"
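
The ls/openssl/ln runs above follow OpenSSL's hashed-directory convention: compute the certificate's subject hash and expose the PEM as /etc/ssl/certs/<hash>.0 so lookup-by-hash can find the CA. A Go sketch of the same two steps (the helper name is made up; sudo is omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA hashes the certificate subject and symlinks the PEM as
// /etc/ssl/certs/<hash>.0, like the `openssl x509 -hash` + `ln -fs` pair above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	_ = installCA("/usr/share/ca-certificates/minikubeCA.pem")
}
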
	I0327 14:03:50.731989   14042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 14:03:50.733560   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 14:03:50.735642   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 14:03:50.737594   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 14:03:50.739600   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 14:03:50.741615   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 14:03:50.743398   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0327 14:03:50.745442   14042 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:50.745503   14042 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:03:50.755805   14042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 14:03:50.759124   14042 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 14:03:50.759131   14042 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 14:03:50.759133   14042 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 14:03:50.759155   14042 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 14:03:50.762041   14042 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:03:50.762332   14042 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-077000" does not appear in /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:03:50.762430   14042 kubeconfig.go:62] /Users/jenkins/minikube-integration/18158-11341/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-077000" cluster setting kubeconfig missing "stopped-upgrade-077000" context setting]
	I0327 14:03:50.762611   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.763042   14042 kapi.go:59] client config for stopped-upgrade-077000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043c3020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:03:50.763497   14042 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 14:03:50.766116   14042 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-077000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
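
The diff explains the reconfigure: the saved kubeadm.yaml predates both the unix:// CRI-socket scheme and the switch from the systemd to the cgroupfs cgroup driver, so minikube regenerates the cluster config from kubeadm.yaml.new. Drift detection itself rides on diff -u's exit status, roughly:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new`: exit 0 means identical, exit 1 means
// the files differ (stdout carries the unified diff), anything else is an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, _ := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drifted:", drifted)
	fmt.Print(diff)
}
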
	I0327 14:03:50.766123   14042 kubeadm.go:1154] stopping kube-system containers ...
	I0327 14:03:50.766165   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:03:50.777023   14042 docker.go:483] Stopping containers: [5e4db03ec227 048161dfe88e 9d8978a7a14e 7e3614c971ee 497390a10b43 a6edaca08a0a 44fb0f026eb6 6bd655ded881]
	I0327 14:03:50.777087   14042 ssh_runner.go:195] Run: docker stop 5e4db03ec227 048161dfe88e 9d8978a7a14e 7e3614c971ee 497390a10b43 a6edaca08a0a 44fb0f026eb6 6bd655ded881
	I0327 14:03:50.787915   14042 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 14:03:50.793530   14042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:03:50.796448   14042 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:03:50.796460   14042 kubeadm.go:156] found existing configuration files:
	
	I0327 14:03:50.796483   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf
	I0327 14:03:50.799328   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:03:50.799352   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:03:50.802037   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf
	I0327 14:03:50.804309   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:03:50.804331   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:03:50.807439   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf
	I0327 14:03:50.810187   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:03:50.810213   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:03:50.812603   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf
	I0327 14:03:50.815499   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:03:50.815520   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 14:03:50.818140   14042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:03:50.820811   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:50.842586   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.233965   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.353400   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.376631   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.405030   14042 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:03:51.405107   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:51.907287   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:52.407172   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:52.411687   14042 api_server.go:72] duration metric: took 1.00667075s to wait for apiserver process to appear ...
	I0327 14:03:52.411699   14042 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:03:52.411713   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
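
From here on, both processes (PIDs 13860 and 14042) are stuck in the same probe: GET https://10.0.2.15:8443/healthz with a short per-request timeout, each attempt ending in "Client.Timeout exceeded while awaiting headers". A stripped-down version of one probe (the timeout value and TLS handling are assumptions for illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 4 * time.Second, // a hung apiserver yields "Client.Timeout exceeded"
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
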
	I0327 14:03:53.602005   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:53.602187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:03:53.619769   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:03:53.619859   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:03:53.632525   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:03:53.632597   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:03:53.643321   13860 logs.go:276] 1 container: [291d634f0425]
	I0327 14:03:53.643393   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:03:53.653711   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:03:53.653799   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:03:53.663856   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:03:53.663929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:03:53.674842   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:03:53.674902   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:03:53.684862   13860 logs.go:276] 0 containers: []
	W0327 14:03:53.684871   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:03:53.684926   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:03:53.695511   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:03:53.695528   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:03:53.695533   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:03:53.710330   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:03:53.710342   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:03:53.723263   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:03:53.723273   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:03:53.740391   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:03:53.740401   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:03:53.759140   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:03:53.759149   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:03:53.763950   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:03:53.763958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:03:53.801179   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:03:53.801195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:03:53.812952   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:03:53.812963   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:03:53.835558   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:03:53.835565   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:03:53.874729   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:03:53.874740   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:03:53.888032   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:03:53.888045   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:03:53.898920   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:03:53.898933   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:03:53.912927   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:03:53.912939   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:03:53.924253   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:03:53.924265   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:03:53.936336   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:03:53.936346   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:03:53.948090   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:03:53.948103   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:03:53.987238   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:03:53.987249   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:03:56.502248   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:57.413832   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:57.413896   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:01.504465   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:01.504582   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:01.517247   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:01.517315   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:01.529143   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:01.529211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:01.539826   13860 logs.go:276] 1 container: [291d634f0425]
	I0327 14:04:01.539888   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:01.550692   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:01.550754   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:01.562216   13860 logs.go:276] 1 container: [e67b6e619e2c]
	I0327 14:04:01.562303   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:01.573081   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:01.573152   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:01.588248   13860 logs.go:276] 0 containers: []
	W0327 14:04:01.588260   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:01.588322   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:01.599286   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:01.599328   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:01.599334   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:01.616135   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:01.616146   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:01.630793   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:01.630809   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:01.642632   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:01.642647   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:01.658771   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:01.658785   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:01.676070   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:01.676084   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:01.687897   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:01.687911   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:01.699540   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:01.699551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:01.713234   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:01.713248   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:01.733801   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:01.733811   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:01.746971   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:01.746985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:01.784461   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:01.784475   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:01.796142   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:01.796156   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:01.800448   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:01.800455   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:01.836371   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:01.836384   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:01.850279   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:01.850291   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:01.873181   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:01.873188   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:02.414187   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:02.414210   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:04.412216   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:07.414523   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:07.414587   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:09.413549   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:09.413657   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:09.424730   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:09.424798   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:09.439347   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:09.439419   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:09.450071   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:09.450139   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:09.461180   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:09.461253   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:09.474655   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:09.474731   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:09.489035   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:09.489114   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:09.500668   13860 logs.go:276] 0 containers: []
	W0327 14:04:09.500680   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:09.500749   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:09.519535   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:09.519552   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:09.519558   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:09.571573   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:09.571588   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:09.583708   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:09.583719   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:09.595730   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:09.595741   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:09.616632   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:09.616643   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:09.639304   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:09.639312   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:09.659537   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:09.659548   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:09.674185   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:09.674197   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:09.688295   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:09.688306   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:09.702411   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:09.702420   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:09.744857   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:09.744869   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:09.782067   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:09.782080   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:09.799259   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:09.799269   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:09.811535   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:09.811546   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:09.823124   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:09.823134   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:09.835421   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:09.835435   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:09.840070   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:09.840076   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:12.356767   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:12.415070   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:12.415100   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:17.359052   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:17.359457   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:17.391268   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:17.391402   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:17.409778   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:17.409877   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:17.428594   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:17.428660   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:17.439683   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:17.439758   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:17.450095   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:17.450166   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:17.460349   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:17.460423   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:17.471027   13860 logs.go:276] 0 containers: []
	W0327 14:04:17.471037   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:17.471094   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:17.481632   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:17.481654   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:17.481659   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:17.500033   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:17.500044   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:17.512280   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:17.512291   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:17.554204   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:17.554221   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:17.415641   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:17.415660   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:17.558730   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:17.559453   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:17.598678   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:17.598693   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:17.610878   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:17.610889   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:17.622849   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:17.622861   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:17.659154   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:17.659165   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:17.673194   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:17.673206   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:17.688448   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:17.688460   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:17.705385   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:17.705398   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:17.719282   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:17.719293   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:17.740838   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:17.740850   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:17.770049   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:17.770059   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:17.792840   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:17.792850   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:17.804569   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:17.804582   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:20.318975   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:22.416516   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:22.416574   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:25.321410   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:25.321753   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:25.350504   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:25.350635   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:25.368642   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:25.368729   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:25.382215   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:25.382283   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:25.393886   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:25.393965   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:25.405586   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:25.405664   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:25.416762   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:25.416834   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:25.430861   13860 logs.go:276] 0 containers: []
	W0327 14:04:25.430872   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:25.430933   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:25.441363   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:25.441378   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:25.441384   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:25.455871   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:25.455882   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:25.466805   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:25.466816   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:25.478979   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:25.478988   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:25.520674   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:25.520685   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:25.537922   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:25.537932   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:25.550249   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:25.550261   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:25.569288   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:25.569298   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:25.583919   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:25.583930   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:25.588698   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:25.588704   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:25.628815   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:25.628826   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:25.652713   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:25.652728   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:25.692783   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:25.692795   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:25.707017   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:25.707029   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:25.721549   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:25.721562   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:25.733107   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:25.733130   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:25.747868   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:25.747880   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:27.417452   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:27.417539   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:28.261736   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:32.418710   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:32.418756   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:33.263627   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:33.263998   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:33.294584   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:33.294713   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:33.312461   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:33.312558   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:33.326999   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:33.327086   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:33.339122   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:33.339196   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:33.350601   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:33.350666   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:33.360975   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:33.361041   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:33.371210   13860 logs.go:276] 0 containers: []
	W0327 14:04:33.371220   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:33.371277   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:33.381752   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:33.381769   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:33.381775   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:33.422340   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:33.422360   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:33.439259   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:33.439270   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:33.457050   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:33.457063   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:33.468939   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:33.468952   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:33.484282   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:33.484293   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:33.488877   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:33.488883   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:33.523902   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:33.523913   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:33.538484   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:33.538496   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:33.552861   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:33.552873   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:33.564364   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:33.564374   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:33.606026   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:33.606039   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:33.623482   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:33.623492   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:33.637048   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:33.637061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:33.650655   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:33.650666   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:33.662845   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:33.662858   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:33.692169   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:33.692185   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:36.220347   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:37.420554   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:37.420628   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:41.222705   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:41.222924   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:41.235189   13860 logs.go:276] 2 containers: [805a648c1afc aff5fc4dd6cf]
	I0327 14:04:41.235268   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:41.245698   13860 logs.go:276] 2 containers: [563c41da9f3d c187be62fbcb]
	I0327 14:04:41.245767   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:41.256345   13860 logs.go:276] 1 containers: [291d634f0425]
	I0327 14:04:41.256410   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:41.267317   13860 logs.go:276] 2 containers: [3be737e4358c 7f6f92d5f589]
	I0327 14:04:41.267397   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:41.277758   13860 logs.go:276] 1 containers: [e67b6e619e2c]
	I0327 14:04:41.277826   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:41.288291   13860 logs.go:276] 2 containers: [e4a7ab85a732 dbd62fe532b2]
	I0327 14:04:41.288356   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:41.298178   13860 logs.go:276] 0 containers: []
	W0327 14:04:41.298190   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:41.298254   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:41.308502   13860 logs.go:276] 2 containers: [630633d3300b 11ea3ef4298b]
	I0327 14:04:41.308519   13860 logs.go:123] Gathering logs for kube-apiserver [aff5fc4dd6cf] ...
	I0327 14:04:41.308525   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aff5fc4dd6cf"
	I0327 14:04:41.347237   13860 logs.go:123] Gathering logs for etcd [c187be62fbcb] ...
	I0327 14:04:41.347247   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c187be62fbcb"
	I0327 14:04:41.361797   13860 logs.go:123] Gathering logs for storage-provisioner [11ea3ef4298b] ...
	I0327 14:04:41.361807   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11ea3ef4298b"
	I0327 14:04:41.372859   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:41.372869   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:41.377219   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:41.377228   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:41.410357   13860 logs.go:123] Gathering logs for coredns [291d634f0425] ...
	I0327 14:04:41.410368   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 291d634f0425"
	I0327 14:04:41.421427   13860 logs.go:123] Gathering logs for kube-controller-manager [e4a7ab85a732] ...
	I0327 14:04:41.421440   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a7ab85a732"
	I0327 14:04:41.439047   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:41.439057   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:41.463291   13860 logs.go:123] Gathering logs for kube-apiserver [805a648c1afc] ...
	I0327 14:04:41.463300   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 805a648c1afc"
	I0327 14:04:41.477473   13860 logs.go:123] Gathering logs for etcd [563c41da9f3d] ...
	I0327 14:04:41.477486   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563c41da9f3d"
	I0327 14:04:41.492051   13860 logs.go:123] Gathering logs for kube-scheduler [3be737e4358c] ...
	I0327 14:04:41.492061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be737e4358c"
	I0327 14:04:41.503860   13860 logs.go:123] Gathering logs for kube-controller-manager [dbd62fe532b2] ...
	I0327 14:04:41.503871   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbd62fe532b2"
	I0327 14:04:41.518281   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:04:41.518292   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:41.530096   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:41.530107   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:41.571060   13860 logs.go:123] Gathering logs for kube-scheduler [7f6f92d5f589] ...
	I0327 14:04:41.571070   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f6f92d5f589"
	I0327 14:04:41.587597   13860 logs.go:123] Gathering logs for kube-proxy [e67b6e619e2c] ...
	I0327 14:04:41.587609   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e67b6e619e2c"
	I0327 14:04:41.604423   13860 logs.go:123] Gathering logs for storage-provisioner [630633d3300b] ...
	I0327 14:04:41.604435   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 630633d3300b"
	I0327 14:04:42.423304   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:42.423403   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:44.122711   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:47.425997   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:47.426145   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:49.125268   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:49.125437   13860 kubeadm.go:591] duration metric: took 4m4.519984334s to restartPrimaryControlPlane
	W0327 14:04:49.125575   13860 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 14:04:49.125634   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 14:04:50.128270   13860 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002635125s)
	I0327 14:04:50.128335   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 14:04:50.133346   13860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:04:50.136543   13860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:04:50.139382   13860 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:04:50.139388   13860 kubeadm.go:156] found existing configuration files:
	
	I0327 14:04:50.139407   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf
	I0327 14:04:50.142171   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:04:50.142198   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:04:50.144772   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf
	I0327 14:04:50.147756   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:04:50.147783   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:04:50.150694   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf
	I0327 14:04:50.153051   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:04:50.153073   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:04:50.156001   13860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf
	I0327 14:04:50.159185   13860 kubeadm.go:162] "https://control-plane.minikube.internal:52300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:04:50.159206   13860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
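The four grep-then-rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Since the kubeadm reset removed all four files, every grep exits with status 2 and each rm is a no-op. A local sketch of the same pattern (illustrative; minikube actually runs these commands over SSH via ssh_runner):

    // stalecleanup.go - sketch of the grep-then-remove pattern above,
    // run against the local filesystem rather than over SSH.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:52300")
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Missing file or wrong endpoint: treat as stale and remove,
    			// mirroring `sudo grep ... ` followed by `sudo rm -f ...`.
    			os.Remove(conf)
    			fmt.Println("removed stale config:", conf)
    		}
    	}
    }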
	I0327 14:04:50.161673   13860 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 14:04:50.178840   13860 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 14:04:50.178877   13860 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 14:04:50.224950   13860 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 14:04:50.225015   13860 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 14:04:50.225061   13860 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 14:04:50.280960   13860 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 14:04:50.285033   13860 out.go:204]   - Generating certificates and keys ...
	I0327 14:04:50.285068   13860 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 14:04:50.285105   13860 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 14:04:50.285164   13860 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 14:04:50.285197   13860 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 14:04:50.285237   13860 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 14:04:50.285267   13860 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 14:04:50.285309   13860 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 14:04:50.285352   13860 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 14:04:50.285386   13860 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 14:04:50.285433   13860 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 14:04:50.285456   13860 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 14:04:50.285488   13860 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 14:04:50.444152   13860 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 14:04:50.578539   13860 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 14:04:50.742162   13860 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 14:04:50.834903   13860 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 14:04:50.864410   13860 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 14:04:50.864821   13860 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 14:04:50.864925   13860 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 14:04:50.949614   13860 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 14:04:50.953859   13860 out.go:204]   - Booting up control plane ...
	I0327 14:04:50.953915   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 14:04:50.954003   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 14:04:50.954075   13860 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 14:04:50.954123   13860 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 14:04:50.954245   13860 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 14:04:52.428694   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:52.428862   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:52.440180   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:04:52.440265   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:52.452167   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:04:52.452233   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:52.462872   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:04:52.462946   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:52.473381   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:04:52.473462   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:52.483806   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:04:52.483869   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:52.494925   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:04:52.495005   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:52.504715   14042 logs.go:276] 0 containers: []
	W0327 14:04:52.504728   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:52.504790   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:52.516285   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:04:52.516304   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:04:52.516319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:04:52.535194   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:04:52.535204   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:52.547782   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:04:52.547793   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:04:52.562208   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:04:52.562221   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:04:52.589945   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:04:52.589960   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:04:52.610085   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:04:52.610097   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:04:52.622244   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:04:52.622257   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:04:52.645102   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:04:52.645116   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:04:52.657761   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:52.657774   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:52.698357   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:04:52.698378   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:04:52.713257   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:52.713268   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:52.738865   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:04:52.738883   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:04:52.751344   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:04:52.751365   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:04:52.764505   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:52.764517   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:52.768872   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:52.768880   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:52.893517   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:04:52.893529   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:04:55.463602   13860 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.509532 seconds
	I0327 14:04:55.463671   13860 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 14:04:55.468055   13860 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 14:04:55.976603   13860 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 14:04:55.976743   13860 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-823000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 14:04:56.480495   13860 kubeadm.go:309] [bootstrap-token] Using token: k1fncm.sxv8egnmuwflk0mk
	I0327 14:04:56.482160   13860 out.go:204]   - Configuring RBAC rules ...
	I0327 14:04:56.482221   13860 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 14:04:56.482797   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 14:04:56.489152   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 14:04:56.490073   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 14:04:56.491054   13860 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 14:04:56.491910   13860 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 14:04:56.496006   13860 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 14:04:56.645989   13860 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 14:04:56.885052   13860 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 14:04:56.885784   13860 kubeadm.go:309] 
	I0327 14:04:56.885825   13860 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 14:04:56.885829   13860 kubeadm.go:309] 
	I0327 14:04:56.885902   13860 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 14:04:56.885941   13860 kubeadm.go:309] 
	I0327 14:04:56.885959   13860 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 14:04:56.885997   13860 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 14:04:56.886076   13860 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 14:04:56.886096   13860 kubeadm.go:309] 
	I0327 14:04:56.886128   13860 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 14:04:56.886131   13860 kubeadm.go:309] 
	I0327 14:04:56.886156   13860 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 14:04:56.886161   13860 kubeadm.go:309] 
	I0327 14:04:56.886196   13860 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 14:04:56.886272   13860 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 14:04:56.886344   13860 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 14:04:56.886350   13860 kubeadm.go:309] 
	I0327 14:04:56.886425   13860 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 14:04:56.886542   13860 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 14:04:56.886546   13860 kubeadm.go:309] 
	I0327 14:04:56.886587   13860 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token k1fncm.sxv8egnmuwflk0mk \
	I0327 14:04:56.886632   13860 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf \
	I0327 14:04:56.886644   13860 kubeadm.go:309] 	--control-plane 
	I0327 14:04:56.886646   13860 kubeadm.go:309] 
	I0327 14:04:56.886695   13860 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 14:04:56.886697   13860 kubeadm.go:309] 
	I0327 14:04:56.886742   13860 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token k1fncm.sxv8egnmuwflk0mk \
	I0327 14:04:56.886798   13860 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf 
	I0327 14:04:56.886847   13860 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
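The --discovery-token-ca-cert-hash in the join commands above is not a hash of the certificate file itself: kubeadm hashes the CA certificate's DER-encoded Subject Public Key Info with SHA-256. A sketch of that derivation (illustrative; the path is the certs dir minikube uses in this log):

    // cacerthash.go - sketch of how kubeadm's discovery-token-ca-cert-hash
    // is computed: sha256 over the CA cert's RawSubjectPublicKeyInfo.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }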
	I0327 14:04:56.886858   13860 cni.go:84] Creating CNI manager for ""
	I0327 14:04:56.886866   13860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:04:56.892669   13860 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 14:04:56.899577   13860 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 14:04:56.902426   13860 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
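The conflist written above is the bridge CNI configuration; the log records only its size (457 bytes), not its contents, so the JSON below is hypothetical but shows the usual shape of a bridge plus host-local IPAM config like the one minikube generates:

    // cniconflist.go - writes a representative bridge CNI conflist
    // (hypothetical contents; only the file path and size appear in the log).
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step,
    	// but writes locally instead of over SSH.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }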
	I0327 14:04:56.907113   13860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 14:04:56.907195   13860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-823000 minikube.k8s.io/updated_at=2024_03_27T14_04_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=running-upgrade-823000 minikube.k8s.io/primary=true
	I0327 14:04:56.907226   13860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 14:04:56.910537   13860 ops.go:34] apiserver oom_adj: -16
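An oom_adj of -16 tells the kernel OOM killer to strongly avoid the apiserver process when memory runs short, which is why minikube verifies it after startup. A sketch of the check above (illustrative; it folds the pgrep lookup and the /proc read from the two log lines into one program):

    // oomadj.go - find the kube-apiserver PID and print its OOM-killer
    // adjustment, as the log's pgrep + cat /proc/<pid>/oom_adj steps do.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: exact name match, -n: newest matching process.
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// -16 (as logged above) makes the OOM killer much less likely to
    	// pick the apiserver when memory is exhausted.
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }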
	I0327 14:04:56.950798   13860 kubeadm.go:1107] duration metric: took 43.665667ms to wait for elevateKubeSystemPrivileges
	W0327 14:04:56.950905   13860 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 14:04:56.950911   13860 kubeadm.go:393] duration metric: took 4m12.358969833s to StartCluster
	I0327 14:04:56.950920   13860 settings.go:142] acquiring lock: {Name:mkdd1901c274fdaab611fbdc96cb9f09e61b9c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:04:56.951086   13860 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:04:56.951478   13860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:04:56.951689   13860 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:04:56.955728   13860 out.go:177] * Verifying Kubernetes components...
	I0327 14:04:56.951707   13860 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 14:04:56.951864   13860 config.go:182] Loaded profile config "running-upgrade-823000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:04:56.963662   13860 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-823000"
	I0327 14:04:56.963675   13860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:04:56.963677   13860 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-823000"
	W0327 14:04:56.963680   13860 addons.go:243] addon storage-provisioner should already be in state true
	I0327 14:04:56.963677   13860 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-823000"
	I0327 14:04:56.963695   13860 host.go:66] Checking if "running-upgrade-823000" exists ...
	I0327 14:04:56.963701   13860 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-823000"
	I0327 14:04:56.964915   13860 kapi.go:59] client config for running-upgrade-823000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102607020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:04:56.965728   13860 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-823000"
	W0327 14:04:56.965735   13860 addons.go:243] addon default-storageclass should already be in state true
	I0327 14:04:56.965742   13860 host.go:66] Checking if "running-upgrade-823000" exists ...
	I0327 14:04:56.970702   13860 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:04:56.973576   13860 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:04:56.973582   13860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 14:04:56.973588   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:04:56.974245   13860 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 14:04:56.974250   13860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 14:04:56.974253   13860 sshutil.go:53] new ssh client: &{IP:localhost Port:52268 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/running-upgrade-823000/id_rsa Username:docker}
	I0327 14:04:57.042839   13860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:04:57.047769   13860 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:04:57.047805   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:04:57.052215   13860 api_server.go:72] duration metric: took 100.51625ms to wait for apiserver process to appear ...
	I0327 14:04:57.052224   13860 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:04:57.052231   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:57.081328   13860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:04:57.081902   13860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
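
The two apply commands above are the tail end of the addon install: each manifest was copied into /etc/kubernetes/addons over SSH (the scp lines at 14:04:56), then applied with the version-pinned kubectl that minikube keeps under /var/lib/minikube/binaries. A minimal Go sketch of the same two invocations, assuming it runs on the guest directly rather than through the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the apply lines above. sudo accepts VAR=value
// arguments before the command, which is how KUBECONFIG is set here.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}
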
	I0327 14:04:55.410082   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:02.052845   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
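
The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs that dominate the rest of this log are a poll loop: each probe gets roughly five seconds (14:04:57 to 14:05:02 above) before the client gives up and retries. A sketch of that pattern; the InsecureSkipVerify shortcut is an assumption for brevity, the real client trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Each probe gets a 5s budget, matching the ~5s gaps between the
	// "Checking apiserver healthz" and "stopped:" lines in this log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip cert verification to keep the sketch short;
		// the test cluster's CA would normally be loaded instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// A real loop would also enforce an overall deadline; here the
	// per-request timeout paces the retries, as in the log.
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
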
	I0327 14:05:02.052878   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:00.411219   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:00.411438   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:00.441377   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:00.441472   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:00.458905   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:00.458974   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:00.470244   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:00.470304   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:00.480413   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:00.480483   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:00.493407   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:00.493473   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:00.504297   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:00.504362   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:00.514568   14042 logs.go:276] 0 containers: []
	W0327 14:05:00.514581   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:00.514637   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:00.533254   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:00.533282   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:00.533288   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
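
The "container status" command above leans on a shell fallback: `which crictl || echo crictl` substitutes the bare name when crictl is missing, that bare invocation then fails, and the `|| sudo docker ps -a` branch runs instead. The same logic as a hedged Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to `docker ps -a`,
// mirroring the shell pipeline in the log line above.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
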
	I0327 14:05:00.545189   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:00.545200   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:00.558529   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:00.558539   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:00.569812   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:00.569824   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:00.581170   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:00.581181   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:00.592986   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:00.592996   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:00.604999   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:00.605009   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:00.631570   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:00.631588   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:00.670633   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:00.670641   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:00.675314   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:00.675324   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:00.691574   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:00.691583   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:00.729343   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:00.729356   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:00.744321   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:00.744332   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:00.759148   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:00.759159   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:00.784328   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:00.784338   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:00.795940   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:00.795951   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
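
Each diagnostic pass in this log follows the same shape: resolve container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component>` (the Docker runtime names pod containers with a k8s_ prefix, so the filter matches running and exited instances alike), then tail the last 400 lines of each. A compact sketch of that loop, assuming Docker is reachable on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `docker ps -a --filter=name=k8s_<name>` calls.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Same bound as the log's `docker logs --tail 400 <id>`.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
		}
	}
}
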
	I0327 14:05:03.320838   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:07.054167   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:07.054209   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:08.322815   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:08.323015   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:08.336539   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:08.336614   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:08.347593   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:08.347667   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:08.358123   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:08.358189   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:08.369442   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:08.369512   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:08.379886   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:08.379955   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:08.390441   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:08.390513   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:08.400932   14042 logs.go:276] 0 containers: []
	W0327 14:05:08.400943   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:08.401006   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:08.411474   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:08.411492   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:08.411498   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:08.428953   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:08.428963   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:08.453706   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:08.453713   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:08.467312   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:08.467323   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:08.478336   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:08.478346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:08.489662   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:08.489671   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:08.508014   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:08.508024   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:08.544659   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:08.544670   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:08.569110   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:08.569120   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:08.584398   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:08.584407   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:08.596509   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:08.596521   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:08.611899   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:08.611908   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:08.629531   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:08.629542   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:08.642304   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:08.642316   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:08.658277   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:08.658286   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:08.696492   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:08.696501   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:12.054423   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:12.054448   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:11.202960   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:17.054643   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:17.054667   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:16.205111   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:16.205218   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:16.216330   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:16.216396   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:16.226873   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:16.226945   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:16.237214   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:16.237286   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:16.247782   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:16.247851   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:16.258131   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:16.258197   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:16.270172   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:16.270254   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:16.280222   14042 logs.go:276] 0 containers: []
	W0327 14:05:16.280235   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:16.280304   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:16.290731   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:16.290747   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:16.290752   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:16.315152   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:16.315160   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:16.352807   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:16.352815   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:16.366178   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:16.366187   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:16.383293   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:16.383304   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:16.395174   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:16.395184   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:16.406511   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:16.406522   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:16.421130   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:16.421145   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:16.434139   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:16.434158   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:16.471979   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:16.471992   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:16.483894   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:16.483917   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:16.496714   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:16.496727   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:16.510437   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:16.510449   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:16.522178   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:16.522189   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:16.526643   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:16.526651   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:16.540939   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:16.540950   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:22.054963   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:22.055011   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:19.068595   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:27.055476   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:27.055513   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 14:05:27.440237   13860 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 14:05:27.444625   13860 out.go:177] * Enabled addons: storage-provisioner
	I0327 14:05:27.452490   13860 addons.go:505] duration metric: took 30.501221708s for enable addons: enabled=[storage-provisioner]
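
The 'default-storageclass' failure reported at 14:05:27 is an ordinary API read timing out: to make "standard" the default class, the addon callback first lists StorageClasses at /apis/storage.k8s.io/v1/storageclasses, and that GET never reaches the wedged apiserver. A client-go sketch of the same call, reusing the host and cert paths from the rest.Config dump earlier in this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and file paths are the ones from the kapi.go client config above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/running-upgrade-823000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// This is the List call the addon callback makes; against the wedged
	// apiserver it fails with "dial tcp 10.0.2.15:8443: i/o timeout".
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
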
	I0327 14:05:24.071142   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:24.071318   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:24.082983   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:24.083053   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:24.094444   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:24.094514   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:24.105391   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:24.105466   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:24.116831   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:24.116913   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:24.130734   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:24.130803   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:24.141950   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:24.142023   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:24.155258   14042 logs.go:276] 0 containers: []
	W0327 14:05:24.155273   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:24.155333   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:24.166021   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:24.166040   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:24.166046   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:24.181450   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:24.181461   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:24.217890   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:24.217901   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:24.232025   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:24.232034   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:24.242903   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:24.242917   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:24.255153   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:24.255163   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:24.266864   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:24.266876   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:24.271466   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:24.271476   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:24.307241   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:24.307253   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:24.331656   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:24.331669   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:24.348896   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:24.348905   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:24.361035   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:24.361046   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:24.385116   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:24.385123   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:24.396353   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:24.396364   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:24.410197   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:24.410208   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:24.429804   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:24.429816   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:26.943522   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:32.056256   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:32.056274   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:31.945859   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:31.946245   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:31.978068   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:31.978210   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:31.998370   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:31.998465   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:32.013081   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:32.013185   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:32.025188   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:32.025261   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:32.035694   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:32.035768   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:32.048704   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:32.048773   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:32.058833   14042 logs.go:276] 0 containers: []
	W0327 14:05:32.058845   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:32.058897   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:32.069696   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:32.069716   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:32.069722   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:32.108245   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:32.108257   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:32.123762   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:32.123774   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:32.135985   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:32.135998   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:32.148237   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:32.148248   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:32.159801   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:32.159814   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:32.195608   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:32.195623   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:32.207227   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:32.207238   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:32.221582   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:32.221591   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:32.235612   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:32.235624   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:32.247582   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:32.247593   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:32.271697   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:32.271704   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:32.284677   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:32.284688   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:32.289023   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:32.289032   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:32.317800   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:32.317811   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:32.332264   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:32.332276   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:37.057004   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:37.057044   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:34.851068   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:42.058212   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:42.058289   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:39.852224   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:39.852487   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:39.874852   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:39.874959   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:39.890310   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:39.890385   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:39.904721   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:39.904786   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:39.915528   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:39.915601   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:39.925767   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:39.925830   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:39.937053   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:39.937119   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:39.947132   14042 logs.go:276] 0 containers: []
	W0327 14:05:39.947142   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:39.947202   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:39.958927   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:39.958946   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:39.958951   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:39.963423   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:39.963433   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:39.988103   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:39.988115   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:40.006289   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:40.006301   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:40.019533   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:40.019545   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:40.039397   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:40.039409   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:40.064871   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:40.064878   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:40.077598   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:40.077614   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:40.091609   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:40.091623   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:40.126372   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:40.126382   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:40.137840   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:40.137858   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:40.177707   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:40.177730   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:40.199852   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:40.199864   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:40.215596   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:40.215610   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:40.227565   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:40.227579   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:40.244967   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:40.244981   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:42.759032   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:47.059924   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:47.059947   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:47.761319   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:47.761531   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:47.778823   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:47.778908   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:47.794310   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:47.794373   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:47.805523   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:47.805584   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:47.816044   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:47.816113   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:47.826824   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:47.826893   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:47.837423   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:47.837489   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:47.847635   14042 logs.go:276] 0 containers: []
	W0327 14:05:47.847645   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:47.847694   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:47.858143   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:47.858161   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:47.858167   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:47.894510   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:47.894519   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:47.919232   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:47.919245   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:47.934398   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:47.934411   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:47.964185   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:47.964196   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:48.002929   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:48.002941   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:48.017041   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:48.017051   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:48.041787   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:48.041799   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:48.053351   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:48.053360   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:48.065230   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:48.065239   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:48.069639   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:48.069647   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:48.084118   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:48.084129   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:48.094969   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:48.094979   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:48.109335   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:48.109345   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:48.126441   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:48.126453   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:48.140670   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:48.140681   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:52.061748   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:52.061823   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:50.654406   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:57.064156   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:57.064270   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:57.099840   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:05:57.099913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:57.115270   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:05:57.115344   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:57.126181   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:05:57.126251   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:57.136670   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:05:57.136742   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:57.148872   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:05:57.148933   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:57.159768   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:05:57.159839   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:57.170277   13860 logs.go:276] 0 containers: []
	W0327 14:05:57.170290   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:57.170348   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:57.180742   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:05:57.180757   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:57.180764   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:57.185864   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:57.185873   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:57.221863   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:05:57.221875   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:05:57.238259   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:05:57.238270   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:05:57.249953   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:05:57.249970   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:05:57.263162   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:57.263171   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:57.287467   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:57.287474   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:57.321725   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:05:57.321734   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:05:57.333601   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:05:57.333612   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:05:57.348172   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:05:57.348180   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:05:57.364930   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:05:57.364940   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:05:57.378984   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:05:57.378997   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:57.390314   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:05:57.390326   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:05:55.656937   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:55.657221   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:55.683081   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:55.683185   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:55.698462   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:55.698540   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:55.711380   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:55.711453   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:55.726832   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:55.726911   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:55.738282   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:55.738349   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:55.748869   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:55.748934   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:55.759134   14042 logs.go:276] 0 containers: []
	W0327 14:05:55.759143   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:55.759198   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:55.769333   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:55.769353   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:55.769358   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:55.783048   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:55.783058   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:55.794406   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:55.794418   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:55.806419   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:55.806430   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:55.830013   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:55.830024   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:55.865901   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:55.865911   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:55.869960   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:55.869966   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:55.883752   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:55.883762   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:55.895971   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:55.895983   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:55.907482   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:55.907495   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:55.943335   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:55.943346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:55.970526   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:55.970537   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:55.985004   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:55.985018   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:56.000640   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:56.000651   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:56.018157   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:56.018172   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:56.030009   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:56.030020   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:58.543659   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:59.906559   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:03.545954   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:03.546224   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:03.568284   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:03.568387   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:03.583146   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:03.583219   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:03.595689   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:03.595762   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:03.606371   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:03.606447   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:03.616399   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:03.616475   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:03.626736   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:03.626801   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:03.637003   14042 logs.go:276] 0 containers: []
	W0327 14:06:03.637014   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:03.637072   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:03.646875   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:03.646894   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:03.646900   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:03.660688   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:03.660701   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:03.675626   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:03.675637   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:04.909119   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:04.909347   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:04.929739   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:04.929837   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:04.945179   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:04.945258   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:04.957912   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:04.957988   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:04.975657   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:04.975729   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:04.985818   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:04.985885   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:04.996881   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:04.996949   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:05.009813   13860 logs.go:276] 0 containers: []
	W0327 14:06:05.009825   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:05.009884   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:05.020186   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:05.020201   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:05.020207   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:05.035507   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:05.035520   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:05.047720   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:05.047729   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:05.059205   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:05.059219   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:05.093460   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:05.093467   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:05.130431   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:05.130442   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:05.144897   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:05.144907   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:05.156402   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:05.156412   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:05.180448   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:05.180457   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:05.192260   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:05.192270   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:05.197026   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:05.197032   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:05.210841   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:05.210852   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:05.221920   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:05.221931   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
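	The block above is one full sweep of minikube's diagnostic loop: for each control-plane component it lists matching containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails the last 400 lines of each hit. A minimal Go sketch of that sweep follows; runShell is a hypothetical stand-in for minikube's ssh_runner (which executes inside the guest over SSH), and the component list is read off the filters in the log, so treat this as an approximation rather than the actual logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runShell stands in for minikube's ssh_runner; the real one runs the
// command inside the guest VM over SSH rather than locally.
func runShell(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func gatherComponentLogs() {
	// The same component names the log filters on (k8s_<name>).
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Enumerate all containers, running or exited, for this component.
		ids, err := runShell(fmt.Sprintf(
			"docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
		if err != nil {
			continue
		}
		if strings.TrimSpace(ids) == "" {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range strings.Fields(ids) {
			// Tail the last 400 lines of each container, as the log shows.
			out, _ := runShell("docker logs --tail 400 " + id)
			fmt.Printf("==> %s [%s]\n%s\n", c, id, out)
		}
	}
}

func main() { gatherComponentLogs() }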
	I0327 14:06:03.713282   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:03.713290   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:03.747306   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:03.747319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:03.759013   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:03.759022   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:03.773596   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:03.773607   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:03.790875   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:03.790889   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:03.813998   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:03.814008   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:03.818226   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:03.818235   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:03.842452   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:03.842464   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:03.856463   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:03.856473   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:03.867870   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:03.867884   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:03.885270   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:03.885280   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:03.899157   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:03.899167   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:03.910956   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:03.910966   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:06.425846   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:07.740361   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:11.428095   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:11.428279   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:11.442749   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:11.442824   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:11.454523   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:11.454593   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:11.467054   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:11.467126   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:11.478308   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:11.478380   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:11.488838   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:11.488908   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:11.499373   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:11.499442   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:11.511522   14042 logs.go:276] 0 containers: []
	W0327 14:06:11.511535   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:11.511595   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:11.522526   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:11.522548   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:11.522554   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:11.547616   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:11.547627   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:11.562392   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:11.562403   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:11.575544   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:11.575555   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:11.587920   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:11.587933   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:11.608542   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:11.608556   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:11.632123   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:11.632134   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:11.649626   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:11.649638   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:11.661580   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:11.661590   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:11.684800   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:11.684808   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:11.721249   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:11.721259   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:11.725463   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:11.725470   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:11.765049   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:11.765061   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:11.779624   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:11.779635   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:11.790688   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:11.790701   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:11.805922   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:11.805935   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
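	Each cycle is bracketed by an api_server.go:253 "Checking apiserver healthz" line and, roughly five seconds later, an api_server.go:269 "stopped" line carrying Go's Client.Timeout error. Below is a sketch of a probe that fails the same way; the ~5 s budget is inferred from the timestamps, and skipping TLS verification is an assumption (the guest apiserver's certificate is not trusted by the probing host), so this is illustrative rather than minikube's actual probe.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// A client timeout yields exactly "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)" when the
		// apiserver never answers; ~5 s matches the log's timestamps.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification of the guest's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil // a healthy apiserver answers 200 "ok"
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}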
	I0327 14:06:12.742592   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:12.742790   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:12.760647   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:12.760746   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:12.774907   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:12.774980   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:12.786611   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:12.786684   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:12.797385   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:12.797449   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:12.807925   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:12.807994   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:12.818810   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:12.818877   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:12.829465   13860 logs.go:276] 0 containers: []
	W0327 14:06:12.829475   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:12.829533   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:12.839803   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:12.839819   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:12.839825   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:12.853875   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:12.853885   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:12.865261   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:12.865273   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:12.877084   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:12.877094   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:12.894103   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:12.894112   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:12.905227   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:12.905237   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:12.928931   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:12.928941   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:12.940298   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:12.940311   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:12.976297   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:12.976308   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:12.981303   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:12.981311   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:13.002506   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:13.002517   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:13.014471   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:13.014488   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:13.029767   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:13.029778   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:15.566052   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:14.319638   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:20.568305   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:20.568524   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:20.590602   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:20.590709   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:20.605584   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:20.605672   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:20.618686   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:20.618753   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:20.629800   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:20.629870   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:20.640568   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:20.640641   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:20.650736   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:20.650800   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:20.660749   13860 logs.go:276] 0 containers: []
	W0327 14:06:20.660764   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:20.660825   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:20.671552   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:20.671567   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:20.671576   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:20.682751   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:20.682765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:20.707233   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:20.707241   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:20.718800   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:20.718810   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:20.753028   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:20.753039   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:20.790540   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:20.790551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:20.805370   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:20.805383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:20.816668   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:20.816679   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:20.828218   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:20.828234   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:20.833065   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:20.833072   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:20.846771   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:20.846783   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:20.860989   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:20.861001   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:20.874945   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:20.874958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:19.321893   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:19.322124   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:19.343538   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:19.343633   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:19.365486   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:19.365568   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:19.377024   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:19.377094   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:19.390680   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:19.390757   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:19.401109   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:19.401175   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:19.411427   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:19.411502   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:19.421556   14042 logs.go:276] 0 containers: []
	W0327 14:06:19.421566   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:19.421622   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:19.432935   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:19.432955   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:19.432961   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:19.469493   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:19.469502   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:19.493595   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:19.493604   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:19.505202   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:19.505213   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:19.510652   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:19.510664   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:19.534676   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:19.534686   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:19.546534   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:19.546544   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:19.561615   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:19.561625   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:19.578813   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:19.578824   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:19.614354   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:19.614366   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:19.628039   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:19.628049   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:19.641758   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:19.641768   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:19.653869   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:19.653879   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:19.665618   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:19.665628   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:19.680215   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:19.680227   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:19.691785   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:19.691796   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:22.208804   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:23.393154   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:27.211047   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:27.211223   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:27.223388   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:27.223468   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:27.235418   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:27.235492   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:27.245948   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:27.246017   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:27.259663   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:27.259742   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:27.270584   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:27.270650   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:27.280919   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:27.280984   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:27.291055   14042 logs.go:276] 0 containers: []
	W0327 14:06:27.291071   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:27.291129   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:27.301285   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:27.301302   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:27.301308   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:27.312827   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:27.312842   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:27.316840   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:27.316850   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:27.350317   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:27.350330   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:27.369401   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:27.369412   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:27.380908   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:27.380919   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:27.403434   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:27.403443   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:27.427848   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:27.427865   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:27.449255   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:27.449264   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:27.460937   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:27.460957   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:27.478357   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:27.478368   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:27.504867   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:27.504879   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:27.516360   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:27.516372   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:27.534697   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:27.534707   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:27.573965   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:27.573974   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:27.588407   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:27.588418   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:28.395465   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:28.395882   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:28.448573   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:28.448701   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:28.467543   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:28.467619   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:28.480524   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:28.480622   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:28.493821   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:28.493890   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:28.504811   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:28.504887   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:28.516501   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:28.516580   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:28.531290   13860 logs.go:276] 0 containers: []
	W0327 14:06:28.531300   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:28.531352   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:28.548594   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:28.548608   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:28.548614   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:28.560143   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:28.560154   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:28.577257   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:28.577268   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:28.588503   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:28.588519   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:28.622379   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:28.622385   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:28.626512   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:28.626520   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:28.640590   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:28.640602   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:28.655597   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:28.655607   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:28.668946   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:28.668968   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:28.693938   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:28.693946   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:28.728084   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:28.728095   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:28.743717   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:28.743728   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:28.754774   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:28.754783   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
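	The "container status" step is a two-stage fallback packed into one shell line: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The inner substitution keeps the command word non-empty even when crictl is absent, so a failed crictl invocation falls through to a plain "docker ps -a". The same logic, unpacked into Go for clarity (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers the CRI view (crictl) and falls back to the
// Docker CLI, mirroring the one-liner in the log above.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failed: fall back to docker's view of the containers.
	out, err = exec.Command("/bin/bash", "-c", "sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}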
	I0327 14:06:31.268614   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:30.107345   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:36.270788   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:36.270937   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:36.282519   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:36.282596   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:36.296406   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:36.296474   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:36.311503   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:36.311582   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:36.321662   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:36.321731   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:36.331947   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:36.332018   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:36.342442   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:36.342507   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:36.355360   13860 logs.go:276] 0 containers: []
	W0327 14:06:36.355371   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:36.355427   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:36.366000   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:36.366014   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:36.366019   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:36.377755   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:36.377765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:36.411296   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:36.411304   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:36.415433   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:36.415440   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:36.449597   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:36.449610   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:36.464040   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:36.464052   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:36.479301   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:36.479314   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:36.492397   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:36.492408   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:36.507510   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:36.507522   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:36.519015   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:36.519027   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:36.530691   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:36.530701   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:36.545374   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:36.545383   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:36.563015   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:36.563025   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
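	Besides per-container output, every cycle collects three host-level sources: the kubelet journal, the combined docker/cri-docker journal, and a dmesg dump filtered to warnings and above. The commands appear verbatim in the log; here they are bundled into one self-contained sketch (running them locally rather than over SSH is the only liberty taken):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host-level log sources gathered once per cycle, verbatim from the log.
	hostCmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, h := range hostCmds {
		out, err := exec.Command("/bin/bash", "-c", h.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("==> %s failed: %v\n", h.name, err)
			continue
		}
		fmt.Printf("==> %s\n%s\n", h.name, out)
	}
}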
	I0327 14:06:35.109836   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:35.110138   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:35.137860   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:35.137992   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:35.157243   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:35.157322   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:35.176575   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:35.176648   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:35.190290   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:35.190360   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:35.200859   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:35.200919   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:35.216708   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:35.216775   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:35.227076   14042 logs.go:276] 0 containers: []
	W0327 14:06:35.227087   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:35.227140   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:35.238003   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:35.238021   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:35.238027   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:35.274800   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:35.274811   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:35.294156   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:35.294166   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:35.330346   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:35.330357   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:35.348614   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:35.348624   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:35.363342   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:35.363357   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:35.376604   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:35.376616   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:35.400241   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:35.400249   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:35.438209   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:35.438219   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:35.449937   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:35.449952   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:35.462199   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:35.462214   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:35.474957   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:35.474968   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:35.486816   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:35.486828   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:35.500947   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:35.500958   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:35.524618   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:35.524628   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:35.536458   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:35.536470   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:38.041315   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:39.088341   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:43.043230   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:43.043447   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:43.061137   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:43.061223   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:43.074744   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:43.074816   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:43.085923   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:43.085998   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:43.096251   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:43.096450   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:43.108163   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:43.108232   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:43.125412   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:43.125489   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:43.136536   14042 logs.go:276] 0 containers: []
	W0327 14:06:43.136549   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:43.136607   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:43.147156   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:43.147173   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:43.147180   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:43.165375   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:43.165389   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:43.176885   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:43.176898   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:43.188270   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:43.188279   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:43.226249   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:43.226266   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:43.262356   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:43.262367   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:43.276303   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:43.276319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:43.296674   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:43.296684   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:43.307841   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:43.307853   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:43.312359   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:43.312364   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:43.335942   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:43.335952   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:43.347541   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:43.347551   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:43.359336   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:43.359348   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:43.381840   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:43.381847   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:43.399648   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:43.399658   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:43.414128   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:43.414138   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:44.091040   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:44.091456   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:44.135325   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:44.135468   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:44.155818   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:44.155915   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:44.175717   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:44.175794   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:44.189643   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:44.189716   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:44.200897   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:44.200970   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:44.211610   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:44.211688   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:44.221788   13860 logs.go:276] 0 containers: []
	W0327 14:06:44.221801   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:44.221855   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:44.232671   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:44.232685   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:44.232689   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:44.244368   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:44.244378   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:44.256575   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:44.256589   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:44.271817   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:44.271827   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:44.283318   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:44.283332   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:44.307763   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:44.307770   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:44.341880   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:44.341889   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:44.346390   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:44.346396   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:06:44.360665   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:44.360675   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:44.372458   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:44.372467   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:44.389802   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:44.389811   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:44.436155   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:44.436166   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:44.451703   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:44.451716   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:46.969663   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:45.929634   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:51.971923   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:51.972069   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:51.988942   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:51.989027   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:52.009062   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:52.009136   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:52.019486   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:52.019551   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:52.029779   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:52.029850   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:52.041148   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:52.041221   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:52.051832   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:52.051903   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:52.061655   13860 logs.go:276] 0 containers: []
	W0327 14:06:52.061665   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:52.061722   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:52.072271   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:52.072287   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:06:52.072294   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:06:52.084464   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:52.084474   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:52.101774   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:06:52.101785   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:06:52.115744   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:06:52.115757   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:52.126940   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:52.126951   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:52.161958   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:52.161969   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:52.201418   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:06:52.201432   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:06:52.213144   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:52.213157   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:52.224756   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:52.224766   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:52.239602   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:52.239614   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:52.264024   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:52.264031   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:52.268406   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:52.268414   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:52.286552   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:06:52.286563   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
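
For reference, the repeating pattern above (a /healthz probe that gives up at api_server.go:269, followed by a diagnostics pass over the containers) boils down to a poll loop with a short HTTP client timeout. Below is a minimal Go sketch of that pattern; the timeout values and TLS handling are illustrative assumptions, not minikube's actual code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The in-guest apiserver serves a self-signed certificate, so a
	// diagnostic probe like this would skip verification (assumption).
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		} else {
			// Corresponds to the "stopped: ... Client.Timeout exceeded" lines;
			// the real loop then enumerates containers and gathers their logs.
			fmt.Println("stopped:", err)
		}
		time.Sleep(3 * time.Second)
	}
}
```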
	I0327 14:06:50.930702   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:50.930846   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:50.941840   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:50.941914   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:50.952749   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:50.952821   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:50.963443   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:50.963502   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:50.973886   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:50.973958   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:50.985613   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:50.985679   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:50.996674   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:50.996745   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:51.006859   14042 logs.go:276] 0 containers: []
	W0327 14:06:51.006873   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:51.006933   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:51.017491   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:51.017510   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:51.017516   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:51.033987   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:51.034000   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:51.052452   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:51.052462   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:51.066295   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:51.066308   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:51.077923   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:51.077934   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:51.091698   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:51.091709   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:51.117250   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:51.117262   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:51.135314   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:51.135325   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:51.146772   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:51.146786   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:51.169988   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:51.169995   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:51.181374   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:51.181388   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:51.185457   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:51.185463   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:51.219536   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:51.219548   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:51.232088   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:51.232100   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:51.243423   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:51.243436   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:51.279706   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:51.279719   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:54.802476   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:53.795571   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:59.804656   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:59.804796   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:59.818322   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:06:59.818406   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:59.831505   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:06:59.831573   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:59.842472   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:06:59.842545   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:59.853152   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:06:59.853216   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:59.864344   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:06:59.864413   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:59.874430   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:06:59.874502   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:59.884843   13860 logs.go:276] 0 containers: []
	W0327 14:06:59.884853   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:59.884913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:59.895106   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:06:59.895123   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:06:59.895128   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:06:59.912496   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:06:59.912505   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:06:59.924381   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:06:59.924392   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:06:59.939098   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:06:59.939108   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:06:59.957232   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:59.957243   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:59.992010   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:59.992021   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:59.996479   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:59.996484   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:00.032178   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:00.032188   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:00.046836   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:00.046846   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:00.058207   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:00.058218   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:00.073003   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:00.073017   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:00.084338   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:00.084349   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:00.107955   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:00.107963   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
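
The "container status" step relies on a shell fallback: use crictl when it is on the PATH, otherwise fall back to plain docker. The command string below is copied verbatim from the log; the Go wrapper around it is illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the same shell expression the log shows:
// `which crictl || echo crictl` resolves to crictl when installed, and if
// that invocation fails the outer `||` falls through to `docker ps -a`.
func containerStatus() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
```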
	I0327 14:06:58.797850   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:58.798019   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:58.809501   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:58.809588   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:58.820522   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:58.820589   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:58.831077   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:58.831150   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:58.841172   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:58.841243   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:58.851533   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:58.851609   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:58.865675   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:58.865747   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:58.875819   14042 logs.go:276] 0 containers: []
	W0327 14:06:58.875830   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:58.875888   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:58.886437   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:58.886454   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:58.886460   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:58.923265   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:58.923279   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:58.953178   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:58.953188   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:58.964409   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:58.964421   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:58.981935   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:58.981947   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:58.993650   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:58.993660   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:59.005462   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:59.005476   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:59.043143   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:59.043152   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:59.047754   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:59.047760   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:59.061845   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:59.061858   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:59.077698   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:59.077709   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:59.089191   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:59.089201   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:59.111552   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:59.111561   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:59.126186   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:59.126196   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:59.140006   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:59.140017   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:59.154442   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:59.154452   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:01.667898   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:02.624741   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:06.669459   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:06.669624   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:06.691077   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:06.691180   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:06.705287   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:06.705363   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:06.717371   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:06.717439   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:06.737465   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:06.737541   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:06.748106   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:06.748173   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:06.758605   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:06.758676   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:06.768930   14042 logs.go:276] 0 containers: []
	W0327 14:07:06.768943   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:06.769002   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:06.779675   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:06.779712   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:06.779719   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:06.784123   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:06.784129   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:06.800934   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:06.800944   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:06.824818   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:06.824828   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:06.863639   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:06.863654   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:06.898383   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:06.898398   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:06.910456   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:06.910467   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:06.922424   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:06.922437   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:06.935327   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:06.935340   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:06.971166   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:06.971178   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:06.985104   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:06.985119   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:07.000444   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:07.000457   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:07.013347   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:07.013358   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:07.027515   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:07.027524   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:07.042568   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:07.042582   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:07.054036   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:07.054050   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:07.625976   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:07.626162   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:07.643872   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:07.643962   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:07.658820   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:07.658897   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:07.671962   13860 logs.go:276] 2 containers: [7bac344b8241 3db4399ee448]
	I0327 14:07:07.672032   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:07.683278   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:07.683348   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:07.693889   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:07.693957   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:07.704504   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:07.704575   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:07.714048   13860 logs.go:276] 0 containers: []
	W0327 14:07:07.714058   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:07.714115   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:07.725912   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:07.725928   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:07.725934   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:07.743055   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:07.743067   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:07.766364   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:07.766371   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:07.777369   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:07.777382   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:07.792095   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:07.792110   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:07.797033   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:07.797040   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:07.831258   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:07.831272   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:07.845097   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:07.845110   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:07.856973   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:07.856985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:07.868430   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:07.868442   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:07.882617   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:07.882628   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:07.895354   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:07.895364   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:07.928473   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:07.928482   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
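
Each diagnostics pass begins by discovering container IDs per component with a Docker name filter (the logs.go:276 lines above). A sketch of that step under standard Docker CLI semantics; the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix and returns their short IDs, one per line of
// docker's output.
func containerIDs(component string) ([]string, error) {
	cmd := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}")
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the "N containers: [...]" lines in the log; an empty result
		// produces the `No container was found matching "kindnet"` warning.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```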
	I0327 14:07:10.455721   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:09.573359   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:15.458102   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:15.458309   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:15.476681   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:15.476769   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:15.490604   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:15.490669   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:15.503597   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:15.503671   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:15.514277   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:15.514339   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:15.525178   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:15.525245   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:15.539548   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:15.539618   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:15.549657   13860 logs.go:276] 0 containers: []
	W0327 14:07:15.549675   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:15.549737   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:15.559804   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:15.559822   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:15.559828   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:15.564566   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:15.564571   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:15.578662   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:15.578672   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:15.590653   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:15.590663   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:15.602107   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:15.602118   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:15.619403   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:15.619413   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:15.633769   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:15.633779   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:15.645185   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:15.645195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:15.664032   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:15.664043   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:15.676476   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:15.676486   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:15.712579   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:15.712591   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:15.724635   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:15.724647   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:15.736740   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:15.736750   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:15.771429   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:15.771445   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:15.783219   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:15.783229   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
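
Host-level logs come from journald (the kubelet unit, plus the docker and cri-docker units together) and from dmesg filtered to warnings and above. The command strings below are taken from the log; the wrapper function is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash and prints its combined
// stdout/stderr under a header.
func gather(name, shellCmd string) {
	out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Printf("==> %s <==\n%s", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	// -P disables the pager, -H requests human-readable output, -L=never
	// turns off color, and --level keeps only warn and more severe records.
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```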
	I0327 14:07:14.575607   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:14.575886   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:14.605345   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:14.605474   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:14.623569   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:14.623650   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:14.637306   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:14.637379   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:14.648923   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:14.648995   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:14.659482   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:14.659557   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:14.670024   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:14.670102   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:14.680088   14042 logs.go:276] 0 containers: []
	W0327 14:07:14.680102   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:14.680154   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:14.691450   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:14.691467   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:14.691474   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:14.696766   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:14.696780   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:14.713475   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:14.713490   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:14.741347   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:14.741360   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:14.761762   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:14.761776   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:14.796318   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:14.796330   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:14.811291   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:14.811307   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:14.825876   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:14.825890   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:14.848854   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:14.848861   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:14.886272   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:14.886281   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:14.900436   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:14.900450   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:14.912625   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:14.912639   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:14.929423   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:14.929437   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:14.943131   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:14.943143   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:14.957876   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:14.957890   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:14.982727   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:14.982737   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:17.495661   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:18.308708   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:22.497896   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:22.498062   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:22.510389   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:22.510470   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:22.521156   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:22.521229   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:22.531780   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:22.531842   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:22.542300   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:22.542373   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:22.561356   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:22.561432   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:22.574716   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:22.574788   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:22.585165   14042 logs.go:276] 0 containers: []
	W0327 14:07:22.585177   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:22.585234   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:22.595862   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:22.595879   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:22.595884   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:22.608241   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:22.608252   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:22.645634   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:22.645649   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:22.660491   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:22.660504   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:22.676154   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:22.676167   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:22.691347   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:22.691359   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:22.696087   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:22.696096   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:22.716001   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:22.716011   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:22.727131   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:22.727141   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:22.739981   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:22.739993   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:22.763202   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:22.763218   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:22.788676   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:22.788687   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:22.800180   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:22.800192   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:22.819077   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:22.819091   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:22.858061   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:22.858077   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:22.871792   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:22.871804   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
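
For each discovered container the runner then tails the last 400 lines of output, which accounts for the bulk of the lines above. A minimal sketch of that step, using two of the kube-apiserver container IDs seen in this run as example input:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailLogs fetches the last 400 lines of a container's output.
// docker logs interleaves the container's stdout and stderr, so
// CombinedOutput captures both streams.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"35ff2bce470f", "7e3614c971ee"} {
		logs, err := tailLogs(id)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("==> %s <==\n%s", id, logs)
	}
}
```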
	I0327 14:07:23.310984   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:23.311179   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:23.327457   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:23.327545   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:23.340052   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:23.340120   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:23.350909   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:23.350984   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:23.361607   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:23.361671   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:23.372666   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:23.372732   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:23.383242   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:23.383309   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:23.393492   13860 logs.go:276] 0 containers: []
	W0327 14:07:23.393503   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:23.393560   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:23.403588   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:23.403605   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:23.403611   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:23.415184   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:23.415195   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:23.433118   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:23.433129   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:23.447812   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:23.447822   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:23.461725   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:23.461734   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:23.472809   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:23.472821   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:23.489988   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:23.490000   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:23.503601   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:23.503613   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:23.516552   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:23.516563   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:23.541047   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:23.541058   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:23.553378   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:23.553388   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:23.587152   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:23.587160   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:23.591331   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:23.591337   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:23.625673   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:23.625686   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:23.641044   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:23.641055   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:26.154931   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:25.385841   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:31.157193   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:31.157357   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:31.172632   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:31.172715   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:31.184905   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:31.184973   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:31.195815   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:31.195896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:31.206167   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:31.206228   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:31.216646   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:31.216705   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:31.226836   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:31.226908   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:31.237022   13860 logs.go:276] 0 containers: []
	W0327 14:07:31.237034   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:31.237095   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:31.247977   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:31.247991   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:31.247997   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:31.271343   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:31.271353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:31.284212   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:31.284221   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:31.295564   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:31.295575   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:31.310567   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:31.310580   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:31.328743   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:31.328752   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:31.340062   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:31.340075   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:31.353708   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:31.353721   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:31.365106   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:31.365117   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:31.379223   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:31.379231   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:31.383544   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:31.383554   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:31.418049   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:31.418061   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:31.430011   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:31.430022   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:31.441874   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:31.441885   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:31.453604   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:31.453613   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:30.388443   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:30.388769   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:30.418857   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:30.418988   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:30.439826   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:30.439918   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:30.452956   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:30.453027   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:30.464140   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:30.464209   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:30.474509   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:30.474579   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:30.485765   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:30.485835   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:30.497456   14042 logs.go:276] 0 containers: []
	W0327 14:07:30.497468   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:30.497526   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:30.508323   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:30.508340   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:30.508346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:30.520501   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:30.520513   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:30.532307   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:30.532320   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:30.544099   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:30.544110   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:30.567218   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:30.567230   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:30.571884   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:30.571893   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:30.587415   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:30.587425   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:30.599757   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:30.599767   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:30.634861   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:30.634878   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:30.649171   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:30.649180   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:30.674489   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:30.674500   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:30.688457   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:30.688473   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:30.702813   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:30.702827   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:30.719923   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:30.719933   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:30.734986   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:30.734999   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:30.747170   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:30.747181   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:33.287952   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:33.989429   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:38.290274   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:38.290504   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:38.313746   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:38.313865   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:38.332069   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:38.332141   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:38.349514   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:38.349582   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:38.359434   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:38.359507   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:38.374519   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:38.374585   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:38.384914   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:38.384986   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:38.395119   14042 logs.go:276] 0 containers: []
	W0327 14:07:38.395130   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:38.395182   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:38.405810   14042 logs.go:276] 1 containers: [06fda8b95995]
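[editor's note] Each gathering cycle begins by discovering container IDs per control-plane component, filtering docker ps on the kubelet's k8s_<component> naming convention; an empty result (as for "kindnet" above) is only reported as a warning. A standalone sketch of that discovery step, assuming a local docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers, running or exited,
// whose name matches the kubelet convention k8s_<component>_...
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Zero hits (as with "kindnet" above) just yields an empty list.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}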
	I0327 14:07:38.405827   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:38.405834   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:38.417778   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:38.417788   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:38.422158   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:38.422165   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:38.434250   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:38.434261   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:38.451254   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:38.451264   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:38.463091   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:38.463102   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:38.499371   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:38.499380   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:38.521095   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:38.521102   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:38.535522   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:38.535534   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:38.547642   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:38.547652   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:38.585902   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:38.585914   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:38.609711   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:38.609722   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:38.623348   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:38.623362   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:38.642065   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:38.642076   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:38.656123   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:38.656133   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:38.670658   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:38.670667   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:38.991657   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:38.991792   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:39.006056   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:39.006118   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:39.017827   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:39.017896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:39.028120   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:39.028187   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:39.038798   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:39.038863   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:39.052003   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:39.052070   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:39.062852   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:39.062913   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:39.073303   13860 logs.go:276] 0 containers: []
	W0327 14:07:39.073316   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:39.073370   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:39.084362   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:39.084380   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:39.084386   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:39.096058   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:39.096069   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:39.135305   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:39.135318   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:39.148102   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:39.148116   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:39.160307   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:39.160318   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:39.172252   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:39.172260   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:39.206225   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:39.206233   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:39.220743   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:39.220753   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:39.225702   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:39.225708   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:39.237654   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:39.237664   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:39.252340   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:39.252350   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:39.270151   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:39.270160   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:39.294777   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:39.294784   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:39.306084   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:39.306097   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:39.320162   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:39.320171   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:41.833322   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:41.188897   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:46.833538   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:46.833700   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:46.849242   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:46.849326   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:46.861427   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:46.861490   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:46.879339   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:46.879421   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:46.891486   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:46.891557   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:46.901877   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:46.901944   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:46.916166   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:46.916231   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:46.926323   13860 logs.go:276] 0 containers: []
	W0327 14:07:46.926335   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:46.926398   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:46.940516   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:46.940536   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:46.940542   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:46.953910   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:46.953922   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:46.968248   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:46.968259   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:46.985379   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:46.985391   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:46.997165   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:46.997175   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:47.021038   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:47.021047   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:47.025827   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:47.025836   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:47.061473   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:47.061487   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:47.076572   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:47.076583   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:47.110833   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:47.110842   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:47.123015   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:47.123026   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:47.135540   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:47.135551   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:47.147520   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:47.147531   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:47.167594   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:47.167604   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:47.186149   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:47.186162   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:46.191206   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:46.191369   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:46.202217   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:46.202291   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:46.213220   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:46.213292   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:46.226405   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:46.226472   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:46.236944   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:46.237013   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:46.249478   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:46.249542   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:46.260532   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:46.260601   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:46.270672   14042 logs.go:276] 0 containers: []
	W0327 14:07:46.270683   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:46.270740   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:46.284820   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:46.284836   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:46.284842   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:46.288860   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:46.288870   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:46.323051   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:46.323064   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:46.337661   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:46.337671   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:46.349278   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:46.349289   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:46.385883   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:46.385891   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:46.409890   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:46.409900   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:46.421251   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:46.421261   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:46.438025   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:46.438037   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:46.451624   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:46.451635   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:46.466885   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:46.466895   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:46.481632   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:46.481642   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:46.493234   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:46.493244   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:46.511011   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:46.511021   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:46.523642   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:46.523653   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:46.535702   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:46.535712   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:49.703178   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:49.060512   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:54.063159   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:54.063262   14042 kubeadm.go:591] duration metric: took 4m3.30753925s to restartPrimaryControlPlane
	W0327 14:07:54.063360   14042 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 14:07:54.063403   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 14:07:55.142916   14042 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.079512083s)
	I0327 14:07:55.142983   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 14:07:55.148227   14042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:07:55.151316   14042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:07:55.154097   14042 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:07:55.154106   14042 kubeadm.go:156] found existing configuration files:
	
	I0327 14:07:55.154129   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf
	I0327 14:07:55.156524   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:07:55.156545   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:07:55.159051   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf
	I0327 14:07:55.161979   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:07:55.162000   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:07:55.164404   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf
	I0327 14:07:55.167205   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:07:55.167229   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:07:55.170283   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf
	I0327 14:07:55.172733   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:07:55.172756   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
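[editor's note] Having failed to restart the control plane, minikube resets the cluster and then checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, deleting any file that does not match; here every grep exits with status 2 because kubeadm reset already removed the files. A sketch of that check-and-remove loop, with the endpoint hard-coded for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:52498"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent (status 1)
		// or the file is missing (status 2); either way the stale
		// config is removed before kubeadm init rewrites it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}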
	I0327 14:07:55.175444   14042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 14:07:55.193121   14042 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 14:07:55.193158   14042 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 14:07:55.246597   14042 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 14:07:55.246654   14042 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 14:07:55.246699   14042 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 14:07:55.296137   14042 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 14:07:55.301319   14042 out.go:204]   - Generating certificates and keys ...
	I0327 14:07:55.301391   14042 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 14:07:55.301430   14042 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 14:07:55.301466   14042 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 14:07:55.301504   14042 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 14:07:55.301546   14042 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 14:07:55.301574   14042 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 14:07:55.301602   14042 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 14:07:55.301656   14042 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 14:07:55.301708   14042 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 14:07:55.301782   14042 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 14:07:55.301802   14042 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 14:07:55.301839   14042 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 14:07:55.357770   14042 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 14:07:55.427674   14042 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 14:07:55.599084   14042 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 14:07:55.746491   14042 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 14:07:55.782808   14042 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 14:07:55.783213   14042 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 14:07:55.783234   14042 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 14:07:55.864839   14042 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 14:07:54.705462   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:54.705579   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:54.716973   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:07:54.717041   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:54.727981   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:07:54.728045   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:54.740842   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:07:54.740920   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:54.753171   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:07:54.753253   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:54.764650   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:07:54.764724   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:54.776802   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:07:54.776870   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:54.788145   13860 logs.go:276] 0 containers: []
	W0327 14:07:54.788156   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:54.788212   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:54.799327   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:07:54.799347   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:07:54.799353   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:07:54.819522   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:07:54.819539   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:07:54.832146   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:54.832158   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:54.857196   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:54.857221   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:54.904738   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:07:54.904753   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:07:54.936121   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:07:54.936133   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:07:54.951035   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:07:54.951046   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:07:54.965997   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:07:54.966008   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:07:54.978213   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:07:54.978225   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:07:54.993650   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:54.993660   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:54.998456   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:07:54.998463   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:07:55.014137   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:07:55.014149   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:07:55.033568   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:07:55.033581   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:07:55.047568   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:55.047583   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:55.085119   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:07:55.085135   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:55.869238   14042 out.go:204]   - Booting up control plane ...
	I0327 14:07:55.869288   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 14:07:55.869332   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 14:07:55.869374   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 14:07:55.869427   14042 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 14:07:55.869509   14042 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 14:08:00.373951   14042 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504324 seconds
	I0327 14:08:00.374089   14042 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 14:08:00.382887   14042 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 14:08:00.893345   14042 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 14:08:00.893461   14042 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-077000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 14:08:01.397269   14042 kubeadm.go:309] [bootstrap-token] Using token: mai42w.fgmzuazwr1bj0hq8
	I0327 14:08:01.401666   14042 out.go:204]   - Configuring RBAC rules ...
	I0327 14:08:01.401734   14042 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 14:08:01.401786   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 14:08:01.404452   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 14:08:01.406494   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 14:08:01.407284   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 14:08:01.408102   14042 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 14:08:01.411068   14042 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 14:08:01.565987   14042 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 14:08:01.801305   14042 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 14:08:01.801763   14042 kubeadm.go:309] 
	I0327 14:08:01.801794   14042 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 14:08:01.801797   14042 kubeadm.go:309] 
	I0327 14:08:01.801834   14042 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 14:08:01.801874   14042 kubeadm.go:309] 
	I0327 14:08:01.801888   14042 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 14:08:01.801926   14042 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 14:08:01.801959   14042 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 14:08:01.801965   14042 kubeadm.go:309] 
	I0327 14:08:01.801995   14042 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 14:08:01.802001   14042 kubeadm.go:309] 
	I0327 14:08:01.802024   14042 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 14:08:01.802027   14042 kubeadm.go:309] 
	I0327 14:08:01.802057   14042 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 14:08:01.802115   14042 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 14:08:01.802156   14042 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 14:08:01.802160   14042 kubeadm.go:309] 
	I0327 14:08:01.802203   14042 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 14:08:01.802295   14042 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 14:08:01.802301   14042 kubeadm.go:309] 
	I0327 14:08:01.802385   14042 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mai42w.fgmzuazwr1bj0hq8 \
	I0327 14:08:01.802436   14042 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf \
	I0327 14:08:01.802446   14042 kubeadm.go:309] 	--control-plane 
	I0327 14:08:01.802448   14042 kubeadm.go:309] 
	I0327 14:08:01.802487   14042 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 14:08:01.802518   14042 kubeadm.go:309] 
	I0327 14:08:01.802561   14042 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mai42w.fgmzuazwr1bj0hq8 \
	I0327 14:08:01.802639   14042 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf 
	I0327 14:08:01.802773   14042 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 14:08:01.802784   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:08:01.802794   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:08:01.809355   14042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 14:08:01.813549   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 14:08:01.816556   14042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
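[editor's note] The 457 bytes copied here are the bridge CNI network list written to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the file itself; a representative bridge conflist of roughly this shape (field values illustrative — minikube's actual template may differ) is:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}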
	I0327 14:08:01.821624   14042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 14:08:01.821664   14042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 14:08:01.821691   14042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-077000 minikube.k8s.io/updated_at=2024_03_27T14_08_01_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=stopped-upgrade-077000 minikube.k8s.io/primary=true
	I0327 14:08:01.874039   14042 kubeadm.go:1107] duration metric: took 52.40775ms to wait for elevateKubeSystemPrivileges
	I0327 14:08:01.874048   14042 ops.go:34] apiserver oom_adj: -16
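[editor's note] ops.go records the apiserver's legacy OOM score adjustment, read from /proc/<pid>/oom_adj via the cat command two steps up; the logged value of -16 tells the kernel's OOM killer to strongly prefer other victims. A standalone sketch of that read (structure is this sketch's own):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, then read its legacy OOM score
	// adjustment; -16 (as logged above) lowers its kill priority.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Fprintln(os.Stderr, "pgrep returned no PIDs")
		return
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}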
	W0327 14:08:01.874064   14042 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 14:08:01.874069   14042 kubeadm.go:393] duration metric: took 4m11.132158209s to StartCluster
	I0327 14:08:01.874079   14042 settings.go:142] acquiring lock: {Name:mkdd1901c274fdaab611fbdc96cb9f09e61b9c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:08:01.874159   14042 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:08:01.874609   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:08:01.874813   14042 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:08:01.878413   14042 out.go:177] * Verifying Kubernetes components...
	I0327 14:08:01.874846   14042 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 14:08:01.874891   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:08:01.886412   14042 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-077000"
	I0327 14:08:01.886416   14042 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-077000"
	I0327 14:08:01.886430   14042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-077000"
	I0327 14:08:01.886434   14042 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-077000"
	W0327 14:08:01.886437   14042 addons.go:243] addon storage-provisioner should already be in state true
	I0327 14:08:01.886459   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:08:01.886467   14042 host.go:66] Checking if "stopped-upgrade-077000" exists ...
	I0327 14:08:01.891442   14042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:07:57.600270   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:01.895350   14042 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:08:01.895356   14042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 14:08:01.895364   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:08:01.896335   14042 kapi.go:59] client config for stopped-upgrade-077000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043c3020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:08:01.896456   14042 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-077000"
	W0327 14:08:01.896463   14042 addons.go:243] addon default-storageclass should already be in state true
	I0327 14:08:01.896474   14042 host.go:66] Checking if "stopped-upgrade-077000" exists ...
	I0327 14:08:01.897384   14042 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 14:08:01.897393   14042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 14:08:01.897399   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:08:01.977338   14042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:08:01.982038   14042 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:08:01.982082   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:08:01.985642   14042 api_server.go:72] duration metric: took 110.818666ms to wait for apiserver process to appear ...
	I0327 14:08:01.985649   14042 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:08:01.985656   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:01.997457   14042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:08:01.998334   14042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
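[editor's note] Addon manifests are applied with the guest's pinned kubectl binary, pointed at the in-VM kubeconfig through the KUBECONFIG environment variable, exactly as the two Run lines above show. A standalone sketch of that invocation pattern (the applyManifest helper is this sketch's own):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest runs the cluster's pinned kubectl against an addon
// manifest, as in the storage-provisioner/storageclass steps above.
// sudo accepts the leading VAR=value argument as an environment
// assignment for the command it runs.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		fmt.Println(applyManifest(m))
	}
}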
	I0327 14:08:02.602412   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:02.602569   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:02.613095   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:02.613163   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:02.627087   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:02.627158   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:02.637704   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:02.637776   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:02.655534   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:02.655608   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:02.666366   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:02.666434   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:02.677390   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:02.677457   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:02.688766   13860 logs.go:276] 0 containers: []
	W0327 14:08:02.688778   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:02.688846   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:02.705266   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:02.705285   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:02.705290   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:02.717020   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:02.717031   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:02.729372   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:02.729382   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:02.750019   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:02.750033   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:02.764258   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:02.764269   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:02.779851   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:02.779862   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:02.814289   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:02.814300   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:02.849054   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:02.849064   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:02.863288   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:02.863298   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:02.875338   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:02.875349   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:02.887055   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:02.887065   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:02.898800   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:02.898812   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:02.921757   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:02.921765   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:02.926320   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:02.926330   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:02.938785   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:02.938797   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:05.461441   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:06.987690   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:06.987725   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:10.463716   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:10.463923   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:10.486627   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:10.486748   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:10.501847   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:10.501929   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:10.514929   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:10.514997   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:10.526023   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:10.526089   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:10.536751   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:10.536826   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:10.548106   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:10.548174   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:10.558197   13860 logs.go:276] 0 containers: []
	W0327 14:08:10.558210   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:10.558265   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:10.568406   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:10.568421   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:10.568427   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:10.573195   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:10.573204   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:10.588246   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:10.588255   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:10.600083   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:10.600097   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:10.611336   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:10.611346   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:10.623178   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:10.623189   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:10.634944   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:10.634956   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:10.670116   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:10.670127   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:10.684148   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:10.684159   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:10.701672   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:10.701683   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:10.739430   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:10.739440   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:10.754565   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:10.754575   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:10.769305   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:10.769315   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:10.781061   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:10.781072   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:10.797551   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:10.797563   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:11.987910   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:11.987953   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:13.329216   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:16.988529   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:16.988600   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:18.331366   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:18.331460   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:18.343332   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:18.343432   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:18.357533   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:18.357649   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:18.369724   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:18.369812   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:18.381119   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:18.381193   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:18.392511   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:18.392583   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:18.403826   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:18.403899   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:18.414786   13860 logs.go:276] 0 containers: []
	W0327 14:08:18.414796   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:18.414855   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:18.426308   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:18.426330   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:18.426335   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:18.430743   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:18.430749   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:18.444945   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:18.444958   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:18.458586   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:18.458595   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:18.473289   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:18.473301   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:18.485224   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:18.485234   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:18.524972   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:18.524985   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:18.537156   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:18.537168   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:18.552179   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:18.552191   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:18.576669   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:18.576676   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:18.588754   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:18.588766   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:18.624132   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:18.624139   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:18.635390   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:18.635399   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:18.646546   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:18.646558   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:18.658670   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:18.658683   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:21.177829   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:21.989022   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:21.989062   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:26.180042   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:26.180211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:26.195292   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:26.195362   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:26.205946   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:26.206020   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:26.217228   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:26.217294   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:26.228674   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:26.228756   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:26.243628   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:26.243707   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:26.254321   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:26.254383   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:26.264980   13860 logs.go:276] 0 containers: []
	W0327 14:08:26.264991   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:26.265062   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:26.275537   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:26.275556   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:26.275562   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:26.286694   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:26.286704   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:26.301495   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:26.301506   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:26.336363   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:26.336375   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:26.371555   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:26.371568   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:26.383384   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:26.383396   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:26.394670   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:26.394680   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:26.398922   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:26.398929   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:26.411005   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:26.411018   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:26.425592   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:26.425604   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:26.438001   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:26.438011   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:26.449881   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:26.449890   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:26.471544   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:26.471554   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:26.483764   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:26.483774   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:26.506614   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:26.506623   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:26.990122   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:26.990143   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:31.990971   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:31.990993   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 14:08:32.353876   14042 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 14:08:32.359116   14042 out.go:177] * Enabled addons: storage-provisioner
	I0327 14:08:29.022723   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:32.366891   14042 addons.go:505] duration metric: took 30.492475542s for enable addons: enabled=[storage-provisioner]
	I0327 14:08:34.024983   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:34.025211   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:34.046460   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:34.046559   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:34.061802   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:34.061867   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:34.073835   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:34.073907   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:34.084682   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:34.084740   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:34.095125   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:34.095194   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:34.105703   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:34.105766   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:34.118506   13860 logs.go:276] 0 containers: []
	W0327 14:08:34.118517   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:34.118570   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:34.129233   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:34.129252   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:34.129258   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:34.134209   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:34.134217   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:34.146407   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:34.146418   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:34.174773   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:34.174791   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:34.202135   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:34.202150   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:34.214053   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:34.214063   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:34.228439   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:34.228452   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:34.239660   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:34.239670   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:34.255492   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:34.255505   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:34.267457   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:34.267468   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:34.282671   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:34.282683   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:34.316972   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:34.316981   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:34.331438   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:34.331449   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:34.343172   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:34.343181   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:34.377608   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:34.377619   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:36.897682   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:36.992050   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:36.992073   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:41.899876   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:41.899988   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:41.912818   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:41.912896   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:41.923854   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:41.923923   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:41.935912   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:41.935985   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:41.945690   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:41.945761   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:41.956044   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:41.956112   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:41.966718   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:41.966782   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:41.976215   13860 logs.go:276] 0 containers: []
	W0327 14:08:41.976225   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:41.976285   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:41.986426   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:41.986441   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:41.986447   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:42.001216   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:42.001226   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:42.013049   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:42.013060   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:42.027857   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:42.027871   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:42.038957   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:42.038970   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:42.062965   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:42.062976   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:42.097231   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:42.097241   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:42.109378   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:42.109387   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:42.122786   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:42.122798   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:42.159273   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:42.159287   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:42.176910   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:42.176922   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:42.188861   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:42.188869   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:42.201680   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:42.201691   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:42.215503   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:42.215525   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:42.226914   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:42.226927   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:41.993719   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:41.993737   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:44.733355   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:46.995635   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:46.995663   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:49.735713   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:49.735947   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:08:49.755379   13860 logs.go:276] 1 containers: [5c1c6d6f56bf]
	I0327 14:08:49.755472   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:08:49.769987   13860 logs.go:276] 1 containers: [7ecf48f350a3]
	I0327 14:08:49.770060   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:08:49.782282   13860 logs.go:276] 4 containers: [5be8df5cf8a8 a67fef37c8ee 7bac344b8241 3db4399ee448]
	I0327 14:08:49.782359   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:08:49.792628   13860 logs.go:276] 1 containers: [f021c0b0f404]
	I0327 14:08:49.792698   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:08:49.803389   13860 logs.go:276] 1 containers: [b6bf476a5fbc]
	I0327 14:08:49.803461   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:08:49.814133   13860 logs.go:276] 1 containers: [caa62321834f]
	I0327 14:08:49.814201   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:08:49.824651   13860 logs.go:276] 0 containers: []
	W0327 14:08:49.824663   13860 logs.go:278] No container was found matching "kindnet"
	I0327 14:08:49.824725   13860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:08:49.835472   13860 logs.go:276] 1 containers: [fc33b5c2587a]
	I0327 14:08:49.835488   13860 logs.go:123] Gathering logs for kubelet ...
	I0327 14:08:49.835493   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:08:49.869774   13860 logs.go:123] Gathering logs for coredns [5be8df5cf8a8] ...
	I0327 14:08:49.869798   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5be8df5cf8a8"
	I0327 14:08:49.884050   13860 logs.go:123] Gathering logs for coredns [a67fef37c8ee] ...
	I0327 14:08:49.884067   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a67fef37c8ee"
	I0327 14:08:49.897229   13860 logs.go:123] Gathering logs for Docker ...
	I0327 14:08:49.897243   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:08:49.919684   13860 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:08:49.919692   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:08:49.958529   13860 logs.go:123] Gathering logs for coredns [7bac344b8241] ...
	I0327 14:08:49.958540   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bac344b8241"
	I0327 14:08:49.970632   13860 logs.go:123] Gathering logs for container status ...
	I0327 14:08:49.970645   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:08:49.983710   13860 logs.go:123] Gathering logs for dmesg ...
	I0327 14:08:49.983722   13860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:08:49.987969   13860 logs.go:123] Gathering logs for kube-apiserver [5c1c6d6f56bf] ...
	I0327 14:08:49.987975   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c1c6d6f56bf"
	I0327 14:08:50.002187   13860 logs.go:123] Gathering logs for kube-scheduler [f021c0b0f404] ...
	I0327 14:08:50.002199   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f021c0b0f404"
	I0327 14:08:50.016748   13860 logs.go:123] Gathering logs for kube-controller-manager [caa62321834f] ...
	I0327 14:08:50.016758   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa62321834f"
	I0327 14:08:50.037386   13860 logs.go:123] Gathering logs for etcd [7ecf48f350a3] ...
	I0327 14:08:50.037395   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecf48f350a3"
	I0327 14:08:50.051176   13860 logs.go:123] Gathering logs for coredns [3db4399ee448] ...
	I0327 14:08:50.051187   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3db4399ee448"
	I0327 14:08:50.063451   13860 logs.go:123] Gathering logs for kube-proxy [b6bf476a5fbc] ...
	I0327 14:08:50.063462   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6bf476a5fbc"
	I0327 14:08:50.075648   13860 logs.go:123] Gathering logs for storage-provisioner [fc33b5c2587a] ...
	I0327 14:08:50.075658   13860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc33b5c2587a"
	I0327 14:08:51.997756   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:51.997783   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:52.589484   13860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:57.591677   13860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:57.596114   13860 out.go:177] 
	W0327 14:08:57.599070   13860 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 14:08:57.599086   13860 out.go:239] * 
	W0327 14:08:57.600244   13860 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:08:57.616070   13860 out.go:177] 
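	[Editor's note] The GUEST_START exit above is the end of a fixed-budget poll: api_server.go alternates "Checking apiserver healthz" with "stopped: ... context deadline exceeded" on a roughly 5-second cadence until the 6m0s node-wait budget lapses without a single healthy response. Below is a minimal sketch of such a poll, assuming a 5-second per-request timeout and an overall deadline; the function name and constants are illustrative, not minikube's actual api_server.go.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 OK or the overall budget lapses. Sketch of the behaviour seen in
	// the log (per-request timeout ~5s, overall budget 6m); not minikube's code.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request: "Client.Timeout exceeded"
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			fmt.Printf("Checking apiserver healthz at %s ...\n", url)
			resp, err := client.Get(url)
			if err != nil {
				// When the endpoint never answers, the 5s per-request
				// timeout itself paces the loop, matching the log cadence.
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}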
	I0327 14:08:56.999876   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:56.999919   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:09:02.002068   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:09:02.002167   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:09:02.015635   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:09:02.015709   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:09:02.025650   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:09:02.025715   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:09:02.036382   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:09:02.036459   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:09:02.047035   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:09:02.047102   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:09:02.057398   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:09:02.057467   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:09:02.068182   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:09:02.068262   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:09:02.078420   14042 logs.go:276] 0 containers: []
	W0327 14:09:02.078436   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:09:02.078497   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:09:02.089014   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:09:02.089035   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:09:02.089040   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:09:02.093856   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:09:02.093864   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:09:02.108192   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:09:02.108206   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:09:02.120129   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:09:02.120141   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:09:02.132339   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:09:02.132353   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:09:02.149732   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:09:02.149744   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:09:02.161481   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:09:02.161494   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:09:02.179275   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:09:02.179286   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:09:02.190297   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:09:02.190308   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:09:02.214955   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:09:02.214962   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:09:02.247761   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:02.247853   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:02.249301   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:09:02.249305   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:09:02.288766   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:09:02.288778   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:09:02.303243   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:09:02.303257   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:09:02.316908   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:02.316920   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:09:02.316944   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:09:02.316948   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:02.316952   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:02.316956   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:02.316958   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-03-27 20:59:50 UTC, ends at Wed 2024-03-27 21:09:13 UTC. --
	Mar 27 21:08:58 running-upgrade-823000 dockerd[3242]: time="2024-03-27T21:08:58.046604089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 21:08:58 running-upgrade-823000 dockerd[3242]: time="2024-03-27T21:08:58.046670087Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/40b10db46d0bf72ac75382614c13817d76c6c52a83ee1a1cb89eeeddec24bb2d pid=18617 runtime=io.containerd.runc.v2
	Mar 27 21:08:58 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:08:58Z" level=error msg="ContainerStats resp: {0x400087c7c0 linux}"
	Mar 27 21:08:58 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:08:58Z" level=error msg="ContainerStats resp: {0x400087d3c0 linux}"
	Mar 27 21:08:59 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:08:59Z" level=error msg="ContainerStats resp: {0x400087a080 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x400076b740 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x400076b900 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x400087b680 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x4000930240 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x40004f6080 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x40004f6780 linux}"
	Mar 27 21:09:00 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:00Z" level=error msg="ContainerStats resp: {0x40009302c0 linux}"
	Mar 27 21:09:02 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 21:09:07 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 21:09:10 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:10Z" level=error msg="ContainerStats resp: {0x40000b8540 linux}"
	Mar 27 21:09:10 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:10Z" level=error msg="ContainerStats resp: {0x40005a2640 linux}"
	Mar 27 21:09:11 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:11Z" level=error msg="ContainerStats resp: {0x40009c2040 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x40005a3380 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x40009c3280 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x40004f6bc0 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x40009c3a80 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x4000930100 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x40009302c0 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=error msg="ContainerStats resp: {0x4000930a40 linux}"
	Mar 27 21:09:12 running-upgrade-823000 cri-dockerd[3083]: time="2024-03-27T21:09:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	40b10db46d0bf       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   3832e248e5936
	a6e26afaf6e0d       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   00f79853c9221
	5be8df5cf8a83       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3832e248e5936
	a67fef37c8eee       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   00f79853c9221
	fc33b5c2587aa       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f3ad85889fafa
	b6bf476a5fbce       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   104ccec815b7c
	7ecf48f350a3d       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   a7c15c3edbefb
	caa62321834f4       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   fa89e5816a6fa
	5c1c6d6f56bff       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   98c1f6e59483d
	f021c0b0f4041       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   2e56698feb0a5
	
	
	==> coredns [40b10db46d0b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 235094070974978068.6732385791516225027. HINFO: read udp 10.244.0.2:57330->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 235094070974978068.6732385791516225027. HINFO: read udp 10.244.0.2:39818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 235094070974978068.6732385791516225027. HINFO: read udp 10.244.0.2:47684->10.0.2.3:53: i/o timeout
	
	
	==> coredns [5be8df5cf8a8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:40041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:56782->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:45801->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:35724->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:53710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:36337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:55762->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:42008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:51493->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8972364330644411845.8127340053383970078. HINFO: read udp 10.244.0.2:56197->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a67fef37c8ee] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:45564->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:40010->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:48693->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:49274->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:52369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:49170->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:45514->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:48397->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8174484772130827254.4491947531944889940. HINFO: read udp 10.244.0.3:40310->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6e26afaf6e0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1588618398294232690.5683509817258642695. HINFO: read udp 10.244.0.3:45450->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1588618398294232690.5683509817258642695. HINFO: read udp 10.244.0.3:53686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1588618398294232690.5683509817258642695. HINFO: read udp 10.244.0.3:44569->10.0.2.3:53: i/o timeout
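	[Editor's note] Every CoreDNS instance above fails the same way: its startup HINFO self-check times out against the upstream resolver at 10.0.2.3 (the default DNS address of QEMU's user-mode network), meaning UDP port 53 is unreachable from the pod network. A minimal Go sketch of the same probe follows, assuming a 2-second timeout; the addresses come from the log, everything else is illustrative.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the upstream resolver the CoreDNS pods are timing out on.
		// Illustrative only; run inside the guest to reproduce the symptom.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
			// On the failing guest this mirrors the "read udp ...->10.0.2.3:53:
			// i/o timeout" seen in the CoreDNS error lines above.
			fmt.Println("upstream DNS probe failed:", err)
			return
		}
		fmt.Println("upstream DNS reachable")
	}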
	
	
	==> describe nodes <==
	Name:               running-upgrade-823000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-823000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873
	                    minikube.k8s.io/name=running-upgrade-823000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T14_04_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 21:04:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-823000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 21:09:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 21:04:56 +0000   Wed, 27 Mar 2024 21:04:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 21:04:56 +0000   Wed, 27 Mar 2024 21:04:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 21:04:56 +0000   Wed, 27 Mar 2024 21:04:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 21:04:56 +0000   Wed, 27 Mar 2024 21:04:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-823000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e3d95cc78864f37a5aa7c4511148703
	  System UUID:                1e3d95cc78864f37a5aa7c4511148703
	  Boot ID:                    fa38b47f-18ec-4f0a-ac1c-ac3588bf516b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5lqnw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-xmszk                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-823000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-running-upgrade-823000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-823000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-8ncxx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-823000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-823000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-823000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-823000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-823000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-823000 event: Registered Node running-upgrade-823000 in Controller
	
	
	==> dmesg <==
	[  +1.636025] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.066959] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.065537] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.137556] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085599] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.079313] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.423214] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +14.179099] systemd-fstab-generator[1944]: Ignoring "noauto" for root device
	[  +2.522967] systemd-fstab-generator[2223]: Ignoring "noauto" for root device
	[  +0.144669] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.090990] systemd-fstab-generator[2270]: Ignoring "noauto" for root device
	[  +0.097669] systemd-fstab-generator[2285]: Ignoring "noauto" for root device
	[ +12.716925] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.198693] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.067161] systemd-fstab-generator[3051]: Ignoring "noauto" for root device
	[  +0.065066] systemd-fstab-generator[3062]: Ignoring "noauto" for root device
	[  +0.073447] systemd-fstab-generator[3076]: Ignoring "noauto" for root device
	[  +1.942401] systemd-fstab-generator[3229]: Ignoring "noauto" for root device
	[  +5.457278] systemd-fstab-generator[3599]: Ignoring "noauto" for root device
	[  +1.118819] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[Mar27 21:01] kauditd_printk_skb: 68 callbacks suppressed
	[Mar27 21:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.499712] systemd-fstab-generator[11957]: Ignoring "noauto" for root device
	[  +5.619595] systemd-fstab-generator[12564]: Ignoring "noauto" for root device
	[  +0.463745] systemd-fstab-generator[12695]: Ignoring "noauto" for root device
	
	
	==> etcd [7ecf48f350a3] <==
	{"level":"info","ts":"2024-03-27T21:04:52.352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-27T21:04:52.352Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-27T21:04:52.370Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T21:04:52.370Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T21:04:52.370Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T21:04:52.370Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T21:04:52.370Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T21:04:52.520Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-823000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T21:04:52.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T21:04:52.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T21:04:52.521Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T21:04:52.528Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T21:04:52.524Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:09:13 up 9 min,  0 users,  load average: 0.03, 0.18, 0.15
	Linux running-upgrade-823000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5c1c6d6f56bf] <==
	I0327 21:04:53.925930       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0327 21:04:53.926864       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 21:04:53.927932       1 cache.go:39] Caches are synced for autoregister controller
	I0327 21:04:53.928742       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 21:04:53.929721       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0327 21:04:53.943055       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0327 21:04:53.977699       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0327 21:04:54.668106       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0327 21:04:54.834146       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0327 21:04:54.837268       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0327 21:04:54.837287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 21:04:54.966976       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 21:04:54.978258       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 21:04:55.017004       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0327 21:04:55.019046       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0327 21:04:55.019468       1 controller.go:611] quota admission added evaluator for: endpoints
	I0327 21:04:55.020708       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 21:04:56.004450       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0327 21:04:56.656599       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0327 21:04:56.660020       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0327 21:04:56.697919       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0327 21:04:56.714028       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 21:05:09.520492       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0327 21:05:09.768967       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0327 21:05:10.037878       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [caa62321834f] <==
	I0327 21:05:09.483068       1 range_allocator.go:374] Set node running-upgrade-823000 PodCIDR to [10.244.0.0/24]
	I0327 21:05:09.494224       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0327 21:05:09.494477       1 shared_informer.go:262] Caches are synced for taint
	I0327 21:05:09.494562       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0327 21:05:09.494612       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-823000. Assuming now as a timestamp.
	I0327 21:05:09.494674       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0327 21:05:09.494838       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0327 21:05:09.494962       1 event.go:294] "Event occurred" object="running-upgrade-823000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-823000 event: Registered Node running-upgrade-823000 in Controller"
	I0327 21:05:09.504045       1 shared_informer.go:262] Caches are synced for daemon sets
	I0327 21:05:09.505816       1 shared_informer.go:262] Caches are synced for TTL
	I0327 21:05:09.507512       1 shared_informer.go:262] Caches are synced for GC
	I0327 21:05:09.507536       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0327 21:05:09.509979       1 shared_informer.go:262] Caches are synced for deployment
	I0327 21:05:09.519022       1 shared_informer.go:262] Caches are synced for attach detach
	I0327 21:05:09.523036       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8ncxx"
	I0327 21:05:09.596587       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 21:05:09.609852       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 21:05:09.614410       1 shared_informer.go:262] Caches are synced for disruption
	I0327 21:05:09.614431       1 disruption.go:371] Sending events to api server.
	I0327 21:05:09.770275       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0327 21:05:09.869529       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5lqnw"
	I0327 21:05:09.873344       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xmszk"
	I0327 21:05:10.015818       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 21:05:10.087687       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 21:05:10.087720       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [b6bf476a5fbc] <==
	I0327 21:05:10.021423       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0327 21:05:10.021446       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0327 21:05:10.021455       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0327 21:05:10.034979       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0327 21:05:10.034992       1 server_others.go:206] "Using iptables Proxier"
	I0327 21:05:10.035012       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0327 21:05:10.035106       1 server.go:661] "Version info" version="v1.24.1"
	I0327 21:05:10.035114       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 21:05:10.036121       1 config.go:317] "Starting service config controller"
	I0327 21:05:10.036126       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0327 21:05:10.036135       1 config.go:226] "Starting endpoint slice config controller"
	I0327 21:05:10.036137       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0327 21:05:10.037290       1 config.go:444] "Starting node config controller"
	I0327 21:05:10.037294       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0327 21:05:10.137434       1 shared_informer.go:262] Caches are synced for node config
	I0327 21:05:10.137434       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0327 21:05:10.137459       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [f021c0b0f404] <==
	W0327 21:04:53.896992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 21:04:53.897009       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 21:04:53.897052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 21:04:53.897072       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 21:04:53.897101       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 21:04:53.897132       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 21:04:53.897164       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 21:04:53.897182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 21:04:53.897224       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 21:04:53.897244       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 21:04:53.897270       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 21:04:53.897303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 21:04:53.897363       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 21:04:53.897394       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 21:04:54.813451       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 21:04:54.813506       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 21:04:54.830188       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 21:04:54.830330       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 21:04:54.876486       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 21:04:54.876501       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 21:04:54.891958       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 21:04:54.892036       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 21:04:54.927584       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 21:04:54.927678       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0327 21:04:56.885138       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-03-27 20:59:50 UTC, ends at Wed 2024-03-27 21:09:14 UTC. --
	Mar 27 21:04:57 running-upgrade-823000 kubelet[12570]: E0327 21:04:57.289507   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-823000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-823000"
	Mar 27 21:04:57 running-upgrade-823000 kubelet[12570]: I0327 21:04:57.692325   12570 apiserver.go:52] "Watching apiserver"
	Mar 27 21:04:58 running-upgrade-823000 kubelet[12570]: I0327 21:04:58.124075   12570 reconciler.go:157] "Reconciler: start to sync state"
	Mar 27 21:04:58 running-upgrade-823000 kubelet[12570]: E0327 21:04:58.289645   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-823000\" already exists" pod="kube-system/etcd-running-upgrade-823000"
	Mar 27 21:04:58 running-upgrade-823000 kubelet[12570]: E0327 21:04:58.493451   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-823000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-823000"
	Mar 27 21:04:58 running-upgrade-823000 kubelet[12570]: E0327 21:04:58.689170   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-823000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-823000"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.500964   12570 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.506483   12570 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.506978   12570 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.524871   12570 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.607283   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8fa897fc-2443-4c9b-89d1-31bb9303d2c3-tmp\") pod \"storage-provisioner\" (UID: \"8fa897fc-2443-4c9b-89d1-31bb9303d2c3\") " pod="kube-system/storage-provisioner"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.607383   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6pc\" (UniqueName: \"kubernetes.io/projected/8fa897fc-2443-4c9b-89d1-31bb9303d2c3-kube-api-access-2w6pc\") pod \"storage-provisioner\" (UID: \"8fa897fc-2443-4c9b-89d1-31bb9303d2c3\") " pod="kube-system/storage-provisioner"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.707972   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/254bf42d-ce58-4719-b215-644e9d8854a8-xtables-lock\") pod \"kube-proxy-8ncxx\" (UID: \"254bf42d-ce58-4719-b215-644e9d8854a8\") " pod="kube-system/kube-proxy-8ncxx"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.708076   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/254bf42d-ce58-4719-b215-644e9d8854a8-lib-modules\") pod \"kube-proxy-8ncxx\" (UID: \"254bf42d-ce58-4719-b215-644e9d8854a8\") " pod="kube-system/kube-proxy-8ncxx"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.708086   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/254bf42d-ce58-4719-b215-644e9d8854a8-kube-proxy\") pod \"kube-proxy-8ncxx\" (UID: \"254bf42d-ce58-4719-b215-644e9d8854a8\") " pod="kube-system/kube-proxy-8ncxx"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.708101   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztzc\" (UniqueName: \"kubernetes.io/projected/254bf42d-ce58-4719-b215-644e9d8854a8-kube-api-access-zztzc\") pod \"kube-proxy-8ncxx\" (UID: \"254bf42d-ce58-4719-b215-644e9d8854a8\") " pod="kube-system/kube-proxy-8ncxx"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.874151   12570 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 21:05:09 running-upgrade-823000 kubelet[12570]: I0327 21:05:09.877765   12570 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 21:05:10 running-upgrade-823000 kubelet[12570]: I0327 21:05:10.010223   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbc935c-2134-4994-aaf1-3c2aed645085-config-volume\") pod \"coredns-6d4b75cb6d-xmszk\" (UID: \"ebbc935c-2134-4994-aaf1-3c2aed645085\") " pod="kube-system/coredns-6d4b75cb6d-xmszk"
	Mar 27 21:05:10 running-upgrade-823000 kubelet[12570]: I0327 21:05:10.010285   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m567p\" (UniqueName: \"kubernetes.io/projected/ebbc935c-2134-4994-aaf1-3c2aed645085-kube-api-access-m567p\") pod \"coredns-6d4b75cb6d-xmszk\" (UID: \"ebbc935c-2134-4994-aaf1-3c2aed645085\") " pod="kube-system/coredns-6d4b75cb6d-xmszk"
	Mar 27 21:05:10 running-upgrade-823000 kubelet[12570]: I0327 21:05:10.010302   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lg8j\" (UniqueName: \"kubernetes.io/projected/43051f29-fff0-493e-8b0c-8fbee128a596-kube-api-access-6lg8j\") pod \"coredns-6d4b75cb6d-5lqnw\" (UID: \"43051f29-fff0-493e-8b0c-8fbee128a596\") " pod="kube-system/coredns-6d4b75cb6d-5lqnw"
	Mar 27 21:05:10 running-upgrade-823000 kubelet[12570]: I0327 21:05:10.010314   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43051f29-fff0-493e-8b0c-8fbee128a596-config-volume\") pod \"coredns-6d4b75cb6d-5lqnw\" (UID: \"43051f29-fff0-493e-8b0c-8fbee128a596\") " pod="kube-system/coredns-6d4b75cb6d-5lqnw"
	Mar 27 21:05:10 running-upgrade-823000 kubelet[12570]: I0327 21:05:10.966658   12570 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3832e248e59367ebbe2572fde52aceff5a6574e80a2c8c2078f0cd228f0e5048"
	Mar 27 21:08:58 running-upgrade-823000 kubelet[12570]: I0327 21:08:58.237309   12570 scope.go:110] "RemoveContainer" containerID="3db4399ee448386c3a47e7947d5d9b3063ed8576b92c6a429924ba712f6212df"
	Mar 27 21:08:58 running-upgrade-823000 kubelet[12570]: I0327 21:08:58.248073   12570 scope.go:110] "RemoveContainer" containerID="7bac344b8241d72c1f0b5a6c58541391c2f5a9f60c5de857845470cb60b33475"
	
	
	==> storage-provisioner [fc33b5c2587a] <==
	I0327 21:05:10.042950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 21:05:10.047957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 21:05:10.047983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 21:05:10.051144       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 21:05:10.051268       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-823000_409466be-106b-4f12-b0fb-8a9985a1e7ea!
	I0327 21:05:10.051308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2aec1e8f-7b52-466a-9344-63f1aa117cfb", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-823000_409466be-106b-4f12-b0fb-8a9985a1e7ea became leader
	I0327 21:05:10.152490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-823000_409466be-106b-4f12-b0fb-8a9985a1e7ea!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-823000 -n running-upgrade-823000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-823000 -n running-upgrade-823000: exit status 2 (15.73287125s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-823000" apiserver is not running, skipping kubectl commands (state="Stopped")
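(triage note: the control plane did come up at 21:04 per the etcd and kube-apiserver logs above, yet the apiserver reported "Stopped" by the time of this 15.7s status probe. A minimal manual follow-up sketch, assuming the "running-upgrade-823000" profile still exists and the guest is reachable over SSH; both are stock minikube subcommands, and "kubelet" is the systemd unit named in the journal header above:)
	out/minikube-darwin-arm64 -p running-upgrade-823000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-arm64 logs -p running-upgrade-823000 --file=logs.txt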
helpers_test.go:175: Cleaning up "running-upgrade-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-823000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-823000: (2.280389125s)
--- FAIL: TestRunningBinaryUpgrade (635.43s)

TestKubernetesUpgrade (18.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.836676792s)

-- stdout --
	* [kubernetes-upgrade-524000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-524000" primary control-plane node in "kubernetes-upgrade-524000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-524000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0327 14:01:56.813371   13953 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:01:56.813555   13953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:01:56.813558   13953 out.go:304] Setting ErrFile to fd 2...
	I0327 14:01:56.813561   13953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:01:56.813696   13953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:01:56.814943   13953 out.go:298] Setting JSON to false
	I0327 14:01:56.832331   13953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7286,"bootTime":1711566030,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:01:56.832394   13953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:01:56.838621   13953 out.go:177] * [kubernetes-upgrade-524000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:01:56.854565   13953 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:01:56.847459   13953 notify.go:220] Checking for updates...
	I0327 14:01:56.866429   13953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:01:56.875479   13953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:01:56.884409   13953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:01:56.891269   13953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:01:56.895388   13953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:01:56.898805   13953 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:01:56.898875   13953 config.go:182] Loaded profile config "running-upgrade-823000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:01:56.898920   13953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:01:56.902346   13953 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:01:56.909411   13953 start.go:297] selected driver: qemu2
	I0327 14:01:56.909417   13953 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:01:56.909424   13953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:01:56.911957   13953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:01:56.913609   13953 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:01:56.916575   13953 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 14:01:56.916618   13953 cni.go:84] Creating CNI manager for ""
	I0327 14:01:56.916625   13953 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 14:01:56.916664   13953 start.go:340] cluster config:
	{Name:kubernetes-upgrade-524000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:01:56.921483   13953 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:01:56.930392   13953 out.go:177] * Starting "kubernetes-upgrade-524000" primary control-plane node in "kubernetes-upgrade-524000" cluster
	I0327 14:01:56.934469   13953 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 14:01:56.934492   13953 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 14:01:56.934502   13953 cache.go:56] Caching tarball of preloaded images
	I0327 14:01:56.934564   13953 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:01:56.934570   13953 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 14:01:56.934631   13953 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kubernetes-upgrade-524000/config.json ...
	I0327 14:01:56.934643   13953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kubernetes-upgrade-524000/config.json: {Name:mk249d78db3834f1f797ed01235e0f4ae9e6ef11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:01:56.935003   13953 start.go:360] acquireMachinesLock for kubernetes-upgrade-524000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:01:56.935047   13953 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "kubernetes-upgrade-524000"
	I0327 14:01:56.935062   13953 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:01:56.935100   13953 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:01:56.943417   13953 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:01:56.971113   13953 start.go:159] libmachine.API.Create for "kubernetes-upgrade-524000" (driver="qemu2")
	I0327 14:01:56.971149   13953 client.go:168] LocalClient.Create starting
	I0327 14:01:56.971225   13953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:01:56.971253   13953 main.go:141] libmachine: Decoding PEM data...
	I0327 14:01:56.971263   13953 main.go:141] libmachine: Parsing certificate...
	I0327 14:01:56.971308   13953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:01:56.971329   13953 main.go:141] libmachine: Decoding PEM data...
	I0327 14:01:56.971336   13953 main.go:141] libmachine: Parsing certificate...
	I0327 14:01:56.971640   13953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:01:57.114532   13953 main.go:141] libmachine: Creating SSH key...
	I0327 14:01:57.203988   13953 main.go:141] libmachine: Creating Disk image...
	I0327 14:01:57.203997   13953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:01:57.204145   13953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:01:57.217276   13953 main.go:141] libmachine: STDOUT: 
	I0327 14:01:57.217300   13953 main.go:141] libmachine: STDERR: 
	I0327 14:01:57.217355   13953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2 +20000M
	I0327 14:01:57.228210   13953 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:01:57.228230   13953 main.go:141] libmachine: STDERR: 
	I0327 14:01:57.228246   13953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:01:57.228256   13953 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:01:57.228290   13953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:32:b0:5a:21:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:01:57.230119   13953 main.go:141] libmachine: STDOUT: 
	I0327 14:01:57.230137   13953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:01:57.230158   13953 client.go:171] duration metric: took 259.006208ms to LocalClient.Create
	I0327 14:01:59.230057   13953 start.go:128] duration metric: took 2.295295458s to createHost
	I0327 14:01:59.230193   13953 start.go:83] releasing machines lock for "kubernetes-upgrade-524000", held for 2.295468458s
	W0327 14:01:59.230283   13953 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:01:59.244719   13953 out.go:177] * Deleting "kubernetes-upgrade-524000" in qemu2 ...
	W0327 14:01:59.271465   13953 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:01:59.271507   13953 start.go:728] Will try again in 5 seconds ...
	I0327 14:02:04.272632   13953 start.go:360] acquireMachinesLock for kubernetes-upgrade-524000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:02:04.273030   13953 start.go:364] duration metric: took 324.833µs to acquireMachinesLock for "kubernetes-upgrade-524000"
	I0327 14:02:04.273117   13953 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:02:04.273351   13953 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:02:04.282475   13953 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:02:04.323777   13953 start.go:159] libmachine.API.Create for "kubernetes-upgrade-524000" (driver="qemu2")
	I0327 14:02:04.323833   13953 client.go:168] LocalClient.Create starting
	I0327 14:02:04.323924   13953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:02:04.323991   13953 main.go:141] libmachine: Decoding PEM data...
	I0327 14:02:04.324006   13953 main.go:141] libmachine: Parsing certificate...
	I0327 14:02:04.324074   13953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:02:04.324109   13953 main.go:141] libmachine: Decoding PEM data...
	I0327 14:02:04.324121   13953 main.go:141] libmachine: Parsing certificate...
	I0327 14:02:04.324631   13953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:02:04.469961   13953 main.go:141] libmachine: Creating SSH key...
	I0327 14:02:04.549994   13953 main.go:141] libmachine: Creating Disk image...
	I0327 14:02:04.550001   13953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:02:04.550203   13953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:02:04.563755   13953 main.go:141] libmachine: STDOUT: 
	I0327 14:02:04.563798   13953 main.go:141] libmachine: STDERR: 
	I0327 14:02:04.563868   13953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2 +20000M
	I0327 14:02:04.575439   13953 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:02:04.575460   13953 main.go:141] libmachine: STDERR: 
	I0327 14:02:04.575483   13953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:02:04.575489   13953 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:02:04.575524   13953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:15:6b:13:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:02:04.577420   13953 main.go:141] libmachine: STDOUT: 
	I0327 14:02:04.577437   13953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:02:04.577455   13953 client.go:171] duration metric: took 253.664625ms to LocalClient.Create
	I0327 14:02:06.579297   13953 start.go:128] duration metric: took 2.306269708s to createHost
	I0327 14:02:06.579334   13953 start.go:83] releasing machines lock for "kubernetes-upgrade-524000", held for 2.30668275s
	W0327 14:02:06.579546   13953 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:02:06.592936   13953 out.go:177] 
	W0327 14:02:06.595975   13953 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:02:06.595988   13953 out.go:239] * 
	* 
	W0327 14:02:06.597032   13953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:02:06.608866   13953 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
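(triage note: every VM create and restart in this test died at the same step, the socket_vmnet client could not reach "/var/run/socket_vmnet", so QEMU was never handed a network fd. A quick host-side sketch for checking the daemon, assuming the install paths minikube printed above; the launchctl filter is a guess and the exact service label may differ per install:)
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet
	sudo /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
(the last line reuses the exact client invocation pattern from the log above, with `true` in place of qemu-system-aarch64; while the daemon is down it should fail with the same "Connection refused")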
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-524000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-524000: (3.433429291s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-524000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-524000 status --format={{.Host}}: exit status 7 (54.8925ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182930458s)

-- stdout --
	* [kubernetes-upgrade-524000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-524000" primary control-plane node in "kubernetes-upgrade-524000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-524000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-524000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0327 14:02:10.138872   13989 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:02:10.139012   13989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:02:10.139015   13989 out.go:304] Setting ErrFile to fd 2...
	I0327 14:02:10.139018   13989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:02:10.139138   13989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:02:10.140212   13989 out.go:298] Setting JSON to false
	I0327 14:02:10.157289   13989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7300,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:02:10.157349   13989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:02:10.162407   13989 out.go:177] * [kubernetes-upgrade-524000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:02:10.168384   13989 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:02:10.168467   13989 notify.go:220] Checking for updates...
	I0327 14:02:10.171432   13989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:02:10.174475   13989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:02:10.177417   13989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:02:10.180430   13989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:02:10.183427   13989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:02:10.186655   13989 config.go:182] Loaded profile config "kubernetes-upgrade-524000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 14:02:10.186909   13989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:02:10.191379   13989 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:02:10.198369   13989 start.go:297] selected driver: qemu2
	I0327 14:02:10.198376   13989 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:02:10.198445   13989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:02:10.200959   13989 cni.go:84] Creating CNI manager for ""
	I0327 14:02:10.200978   13989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:02:10.201005   13989 start.go:340] cluster config:
	{Name:kubernetes-upgrade-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:02:10.205166   13989 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:02:10.212369   13989 out.go:177] * Starting "kubernetes-upgrade-524000" primary control-plane node in "kubernetes-upgrade-524000" cluster
	I0327 14:02:10.216321   13989 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 14:02:10.216336   13989 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 14:02:10.216347   13989 cache.go:56] Caching tarball of preloaded images
	I0327 14:02:10.216405   13989 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:02:10.216411   13989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 14:02:10.216464   13989 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kubernetes-upgrade-524000/config.json ...
	I0327 14:02:10.216806   13989 start.go:360] acquireMachinesLock for kubernetes-upgrade-524000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:02:10.216832   13989 start.go:364] duration metric: took 20.5µs to acquireMachinesLock for "kubernetes-upgrade-524000"
	I0327 14:02:10.216842   13989 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:02:10.216847   13989 fix.go:54] fixHost starting: 
	I0327 14:02:10.216964   13989 fix.go:112] recreateIfNeeded on kubernetes-upgrade-524000: state=Stopped err=<nil>
	W0327 14:02:10.216972   13989 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:02:10.224405   13989 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-524000" ...
	I0327 14:02:10.228406   13989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:15:6b:13:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:02:10.230408   13989 main.go:141] libmachine: STDOUT: 
	I0327 14:02:10.230433   13989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:02:10.230467   13989 fix.go:56] duration metric: took 13.621125ms for fixHost
	I0327 14:02:10.230471   13989 start.go:83] releasing machines lock for "kubernetes-upgrade-524000", held for 13.635875ms
	W0327 14:02:10.230477   13989 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:02:10.230503   13989 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:02:10.230508   13989 start.go:728] Will try again in 5 seconds ...
	I0327 14:02:15.232083   13989 start.go:360] acquireMachinesLock for kubernetes-upgrade-524000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:02:15.232597   13989 start.go:364] duration metric: took 417.834µs to acquireMachinesLock for "kubernetes-upgrade-524000"
	I0327 14:02:15.232762   13989 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:02:15.232783   13989 fix.go:54] fixHost starting: 
	I0327 14:02:15.233456   13989 fix.go:112] recreateIfNeeded on kubernetes-upgrade-524000: state=Stopped err=<nil>
	W0327 14:02:15.233483   13989 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:02:15.241949   13989 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-524000" ...
	I0327 14:02:15.246069   13989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:15:6b:13:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubernetes-upgrade-524000/disk.qcow2
	I0327 14:02:15.255441   13989 main.go:141] libmachine: STDOUT: 
	I0327 14:02:15.255503   13989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:02:15.255589   13989 fix.go:56] duration metric: took 22.80875ms for fixHost
	I0327 14:02:15.255605   13989 start.go:83] releasing machines lock for "kubernetes-upgrade-524000", held for 22.986792ms
	W0327 14:02:15.255799   13989 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-524000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-524000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:02:15.263921   13989 out.go:177] 
	W0327 14:02:15.266895   13989 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:02:15.266944   13989 out.go:239] * 
	* 
	W0327 14:02:15.268637   13989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:02:15.277869   13989 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-524000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-524000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-524000 version --output=json: exit status 1 (58.197834ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-524000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-27 14:02:15.349813 -0700 PDT m=+1010.860272168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-524000 -n kubernetes-upgrade-524000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-524000 -n kubernetes-upgrade-524000: exit status 7 (34.945042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-524000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-524000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-524000
--- FAIL: TestKubernetesUpgrade (18.71s)
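
Both restart attempts in this failure end at the same point: the qemu2 driver never gets past connecting to the socket_vmnet socket at /var/run/socket_vmnet ("Connection refused"), so the upgrade path itself is never exercised. A minimal host-side triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet and managed by launchd (both assumptions; paths and service labels vary by install method):

	# Does the socket exist on the build agent?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon actually running?
	pgrep -fl socket_vmnet
	# Locate the launchd service, if any (label depends on the install method)
	sudo launchctl list | grep -i socket_vmnet

If the daemon is down, every qemu2 test that uses Network:socket_vmnet fails the same way, which is consistent with the repeated "Failed to connect" errors throughout this report.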

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.24s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18158
- KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2114789853/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.24s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.2s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18158
- KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2096798816/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.20s)
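
Both skip-upgrade subtests fail identically: hyperkit is an Intel-only hypervisor, and this agent is darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A pre-flight guard along these lines would turn the hard failure into a skip; this is a sketch under that assumption, not the suite's actual gating logic:

	# Illustrative guard: skip hyperkit-only tests on Apple silicon
	if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
		echo "SKIP: hyperkit driver is unsupported on darwin/arm64"
		exit 0
	fi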

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (585.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.826698545 start -p stopped-upgrade-077000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.826698545 start -p stopped-upgrade-077000 --memory=2200 --vm-driver=qemu2 : (46.011165875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.826698545 -p stopped-upgrade-077000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.826698545 -p stopped-upgrade-077000 stop: (12.114923208s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-077000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-077000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m47.77510125s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-077000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-077000" primary control-plane node in "stopped-upgrade-077000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-077000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 14:03:18.700900   14042 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:03:18.701048   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:03:18.701052   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:03:18.701054   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:03:18.701222   14042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:03:18.702380   14042 out.go:298] Setting JSON to false
	I0327 14:03:18.721182   14042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7368,"bootTime":1711566030,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:03:18.721253   14042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:03:18.725068   14042 out.go:177] * [stopped-upgrade-077000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:03:18.733894   14042 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:03:18.735480   14042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:03:18.733929   14042 notify.go:220] Checking for updates...
	I0327 14:03:18.738858   14042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:03:18.741931   14042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:03:18.744908   14042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:03:18.747857   14042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:03:18.751207   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:03:18.754876   14042 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 14:03:18.757828   14042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:03:18.761837   14042 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:03:18.768845   14042 start.go:297] selected driver: qemu2
	I0327 14:03:18.768853   14042 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:18.768899   14042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:03:18.771501   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:03:18.771519   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:03:18.771548   14042 start.go:340] cluster config:
	{Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:18.771609   14042 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:03:18.778704   14042 out.go:177] * Starting "stopped-upgrade-077000" primary control-plane node in "stopped-upgrade-077000" cluster
	I0327 14:03:18.782839   14042 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:03:18.782853   14042 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 14:03:18.782859   14042 cache.go:56] Caching tarball of preloaded images
	I0327 14:03:18.782905   14042 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:03:18.782911   14042 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 14:03:18.782954   14042 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/config.json ...
	I0327 14:03:18.783339   14042 start.go:360] acquireMachinesLock for stopped-upgrade-077000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:03:18.783368   14042 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "stopped-upgrade-077000"
	I0327 14:03:18.783379   14042 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:03:18.783383   14042 fix.go:54] fixHost starting: 
	I0327 14:03:18.783497   14042 fix.go:112] recreateIfNeeded on stopped-upgrade-077000: state=Stopped err=<nil>
	W0327 14:03:18.783506   14042 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:03:18.790845   14042 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-077000" ...
	I0327 14:03:18.794936   14042 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52464-:22,hostfwd=tcp::52465-:2376,hostname=stopped-upgrade-077000 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/disk.qcow2
	I0327 14:03:18.843091   14042 main.go:141] libmachine: STDOUT: 
	I0327 14:03:18.843129   14042 main.go:141] libmachine: STDERR: 
	I0327 14:03:18.843134   14042 main.go:141] libmachine: Waiting for VM to start (ssh -p 52464 docker@127.0.0.1)...
	I0327 14:03:39.251049   14042 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/config.json ...
	I0327 14:03:39.251684   14042 machine.go:94] provisionDockerMachine start ...
	I0327 14:03:39.251849   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.252254   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.252266   14042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 14:03:39.342064   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 14:03:39.342089   14042 buildroot.go:166] provisioning hostname "stopped-upgrade-077000"
	I0327 14:03:39.342156   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.342359   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.342368   14042 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-077000 && echo "stopped-upgrade-077000" | sudo tee /etc/hostname
	I0327 14:03:39.424644   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-077000
	
	I0327 14:03:39.424706   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.424844   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.424856   14042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-077000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-077000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-077000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 14:03:39.501918   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 14:03:39.501933   14042 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18158-11341/.minikube CaCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18158-11341/.minikube}
	I0327 14:03:39.501944   14042 buildroot.go:174] setting up certificates
	I0327 14:03:39.501955   14042 provision.go:84] configureAuth start
	I0327 14:03:39.501965   14042 provision.go:143] copyHostCerts
	I0327 14:03:39.502084   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem, removing ...
	I0327 14:03:39.502093   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem
	I0327 14:03:39.502255   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/cert.pem (1123 bytes)
	I0327 14:03:39.502543   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem, removing ...
	I0327 14:03:39.502548   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem
	I0327 14:03:39.502645   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/key.pem (1675 bytes)
	I0327 14:03:39.502846   14042 exec_runner.go:144] found /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem, removing ...
	I0327 14:03:39.502851   14042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem
	I0327 14:03:39.502933   14042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.pem (1078 bytes)
	I0327 14:03:39.503061   14042 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-077000 san=[127.0.0.1 localhost minikube stopped-upgrade-077000]
	I0327 14:03:39.606406   14042 provision.go:177] copyRemoteCerts
	I0327 14:03:39.606453   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 14:03:39.606462   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:39.644033   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 14:03:39.651207   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 14:03:39.658128   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 14:03:39.664557   14042 provision.go:87] duration metric: took 162.596125ms to configureAuth
	I0327 14:03:39.664566   14042 buildroot.go:189] setting minikube options for container-runtime
	I0327 14:03:39.664658   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:03:39.664694   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.664781   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.664786   14042 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 14:03:39.732730   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 14:03:39.732738   14042 buildroot.go:70] root file system type: tmpfs
	I0327 14:03:39.732798   14042 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 14:03:39.732845   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.732949   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.732984   14042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 14:03:39.808429   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 14:03:39.808482   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:39.808600   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:39.808611   14042 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 14:03:40.171752   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 14:03:40.171766   14042 machine.go:97] duration metric: took 920.086083ms to provisionDockerMachine
	I0327 14:03:40.171772   14042 start.go:293] postStartSetup for "stopped-upgrade-077000" (driver="qemu2")
	I0327 14:03:40.171779   14042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 14:03:40.171849   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 14:03:40.171859   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:40.208203   14042 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 14:03:40.209486   14042 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 14:03:40.209492   14042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/addons for local assets ...
	I0327 14:03:40.209571   14042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18158-11341/.minikube/files for local assets ...
	I0327 14:03:40.209686   14042 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem -> 117522.pem in /etc/ssl/certs
	I0327 14:03:40.209816   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 14:03:40.212356   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:03:40.219427   14042 start.go:296] duration metric: took 47.650459ms for postStartSetup
	I0327 14:03:40.219441   14042 fix.go:56] duration metric: took 21.436376125s for fixHost
	I0327 14:03:40.219474   14042 main.go:141] libmachine: Using SSH client type: native
	I0327 14:03:40.219577   14042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030d1bf0] 0x1030d4450 <nil>  [] 0s} localhost 52464 <nil> <nil>}
	I0327 14:03:40.219582   14042 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 14:03:40.286254   14042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711573420.354892254
	
	I0327 14:03:40.286261   14042 fix.go:216] guest clock: 1711573420.354892254
	I0327 14:03:40.286265   14042 fix.go:229] Guest: 2024-03-27 14:03:40.354892254 -0700 PDT Remote: 2024-03-27 14:03:40.219443 -0700 PDT m=+21.552879251 (delta=135.449254ms)
	I0327 14:03:40.286277   14042 fix.go:200] guest clock delta is within tolerance: 135.449254ms
	I0327 14:03:40.286280   14042 start.go:83] releasing machines lock for "stopped-upgrade-077000", held for 21.503226083s
	I0327 14:03:40.286345   14042 ssh_runner.go:195] Run: cat /version.json
	I0327 14:03:40.286346   14042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 14:03:40.286354   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:03:40.286365   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	W0327 14:03:40.286950   14042 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52464: connect: connection refused
	I0327 14:03:40.286981   14042 retry.go:31] will retry after 348.367625ms: dial tcp [::1]:52464: connect: connection refused
	W0327 14:03:40.693144   14042 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 14:03:40.693268   14042 ssh_runner.go:195] Run: systemctl --version
	I0327 14:03:40.696781   14042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 14:03:40.700120   14042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 14:03:40.700170   14042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 14:03:40.705493   14042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 14:03:40.712620   14042 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 14:03:40.712630   14042 start.go:494] detecting cgroup driver to use...
	I0327 14:03:40.712721   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:03:40.722690   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 14:03:40.727062   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 14:03:40.730942   14042 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 14:03:40.730974   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 14:03:40.734588   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:03:40.737789   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 14:03:40.740808   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 14:03:40.744220   14042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 14:03:40.747809   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 14:03:40.751337   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 14:03:40.754522   14042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 14:03:40.757363   14042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 14:03:40.760288   14042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 14:03:40.763565   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:40.838581   14042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 14:03:40.844370   14042 start.go:494] detecting cgroup driver to use...
	I0327 14:03:40.844435   14042 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 14:03:40.849985   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:03:40.854563   14042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 14:03:40.863713   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 14:03:40.868725   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 14:03:40.873623   14042 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 14:03:40.931859   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 14:03:40.938691   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 14:03:40.945525   14042 ssh_runner.go:195] Run: which cri-dockerd
	I0327 14:03:40.946848   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 14:03:40.949448   14042 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 14:03:40.954367   14042 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 14:03:41.033187   14042 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 14:03:41.119862   14042 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 14:03:41.119930   14042 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 14:03:41.125337   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:41.202511   14042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:03:42.365150   14042 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162638125s)
	I0327 14:03:42.365206   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 14:03:42.369664   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:03:42.373794   14042 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 14:03:42.448714   14042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 14:03:42.533753   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:42.613889   14042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 14:03:42.619853   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 14:03:42.624676   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:42.706939   14042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 14:03:42.745123   14042 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 14:03:42.745200   14042 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 14:03:42.747373   14042 start.go:562] Will wait 60s for crictl version
	I0327 14:03:42.747419   14042 ssh_runner.go:195] Run: which crictl
	I0327 14:03:42.748838   14042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 14:03:42.764368   14042 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 14:03:42.764440   14042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:03:42.783608   14042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 14:03:42.804056   14042 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 14:03:42.804119   14042 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 14:03:42.805501   14042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 14:03:42.809007   14042 kubeadm.go:877] updating cluster {Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 14:03:42.809057   14042 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 14:03:42.809097   14042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:03:42.819682   14042 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:03:42.819691   14042 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
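
Note the mismatch above: the guest reports its images under the old k8s.gcr.io registry, while the preload check looks for registry.k8s.io names, so an exact string comparison concludes the expected images "weren't preloaded" even though equivalents exist under the old registry name. A toy Go illustration of that comparison (the maps here are hand-written samples, not minikube's data structures):

    package main

    import "fmt"

    func main() {
        // What `docker images` reported (old registry name).
        have := map[string]bool{
            "k8s.gcr.io/kube-apiserver:v1.24.1": true,
        }
        // What the check expects (renamed registry).
        expected := []string{"registry.k8s.io/kube-apiserver:v1.24.1"}
        for _, img := range expected {
            if !have[img] {
                // Exact-string match, so the registry rename defeats it.
                fmt.Printf("%s wasn't preloaded\n", img)
            }
        }
    }
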
	I0327 14:03:42.819738   14042 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:03:42.823486   14042 ssh_runner.go:195] Run: which lz4
	I0327 14:03:42.824732   14042 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 14:03:42.826108   14042 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 14:03:42.826120   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 14:03:43.528704   14042 docker.go:649] duration metric: took 704.008334ms to copy over tarball
	I0327 14:03:43.528766   14042 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 14:03:44.712705   14042 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.183932167s)
	I0327 14:03:44.712718   14042 ssh_runner.go:146] rm: /preloaded.tar.lz4
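
The preload flow above is: stat the tarball on the guest, scp it over when the stat fails, extract with lz4-aware tar, then remove it. A condensed local sketch in Go of the extract-and-clean-up half, using the same tar flags as the log (minikube drives these commands over SSH, and the rm would need root for a root-owned file):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func extractPreload(tarball, destDir string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("tarball not present, would scp it first: %w", err)
        }
        // sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return err
        }
        return os.Remove(tarball) // mirrors the rm once extraction completes
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
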
	I0327 14:03:44.728725   14042 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 14:03:44.732031   14042 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 14:03:44.737320   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:44.815490   14042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 14:03:46.919772   14042 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.104295167s)
	I0327 14:03:46.919864   14042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 14:03:46.933273   14042 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 14:03:46.933283   14042 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 14:03:46.933288   14042 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 14:03:46.944102   14042 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:46.944103   14042 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:46.944162   14042 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:46.944215   14042 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:46.944360   14042 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:46.944384   14042 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:46.944414   14042 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:46.944422   14042 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 14:03:46.952444   14042 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:46.952515   14042 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 14:03:46.952584   14042 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:46.952594   14042 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:46.952599   14042 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:46.952657   14042 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:46.952696   14042 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:46.952770   14042 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.418847   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.420059   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.422498   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.423475   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0327 14:03:49.425267   14042 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 14:03:49.425426   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.425523   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 14:03:49.433602   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:49.461293   14042 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 14:03:49.461326   14042 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.461384   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 14:03:49.463129   14042 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 14:03:49.463143   14042 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.463176   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 14:03:49.471939   14042 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 14:03:49.471958   14042 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.472041   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 14:03:49.481404   14042 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 14:03:49.481425   14042 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:49.481495   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 14:03:49.494983   14042 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 14:03:49.495002   14042 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.495054   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 14:03:49.500334   14042 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 14:03:49.500353   14042 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 14:03:49.500398   14042 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 14:03:49.500412   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 14:03:49.500412   14042 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:49.500436   14042 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 14:03:49.513449   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 14:03:49.513487   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 14:03:49.513494   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 14:03:49.520374   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 14:03:49.523099   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 14:03:49.523202   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:03:49.528977   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 14:03:49.529002   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 14:03:49.529014   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 14:03:49.529052   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 14:03:49.529077   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 14:03:49.530848   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 14:03:49.530860   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 14:03:49.556571   14042 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 14:03:49.556587   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 14:03:49.596033   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0327 14:03:49.596071   14042 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 14:03:49.596082   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 14:03:49.631077   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
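
Each cache load above is a pipeline of the form sudo cat <archive> | docker load. The Go sketch below streams the archive into docker load's stdin the same way, assuming local docker access (minikube runs it through its ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func dockerLoad(archive string) error {
        f, err := os.Open(archive)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // equivalent of piping `cat archive` into docker load
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := dockerLoad("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
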
	W0327 14:03:49.940619   14042 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 14:03:49.941157   14042 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:49.988005   14042 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 14:03:49.988045   14042 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:49.988145   14042 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:03:50.012201   14042 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 14:03:50.012337   14042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:03:50.014259   14042 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 14:03:50.014275   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 14:03:50.042416   14042 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 14:03:50.042430   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 14:03:50.278697   14042 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 14:03:50.278734   14042 cache_images.go:92] duration metric: took 3.345487625s to LoadCachedImages
	W0327 14:03:50.278774   14042 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0327 14:03:50.278782   14042 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 14:03:50.278832   14042 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-077000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
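
In the kubelet unit rendered above, the empty ExecStart= line is significant: systemd drop-ins append ExecStart entries, so the blank assignment first clears the base unit's command before the override fully replaces it. A small Go sketch of rendering such a drop-in from a template (field names are illustrative, not minikube's actual types); the result is what gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --container-runtime-endpoint={{.CRIEndpoint}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // The empty ExecStart= resets the base unit's command so the
        // second ExecStart replaces it rather than adding a second one.
        if err := t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.24.1/kubelet",
            "CRIEndpoint": "unix:///var/run/cri-dockerd.sock",
            "NodeName":    "stopped-upgrade-077000",
            "NodeIP":      "10.0.2.15",
        }); err != nil {
            os.Exit(1)
        }
    }
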
	I0327 14:03:50.278902   14042 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 14:03:50.292826   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:03:50.292837   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:03:50.292841   14042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 14:03:50.292850   14042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-077000 NodeName:stopped-upgrade-077000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 14:03:50.292915   14042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-077000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
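The generated kubeadm config above is four YAML documents in a single file, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick Go snippet that splits such a multi-document file and lists the kinds, without pulling in a YAML library:

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds splits a multi-document YAML string on "---" separators and
    // collects the top-level "kind:" of each document.
    func kinds(multiDoc string) []string {
        var out []string
        for _, doc := range strings.Split(multiDoc, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    out = append(out, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return out
    }

    func main() {
        cfg := `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration`
        fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }
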
	I0327 14:03:50.292964   14042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 14:03:50.296219   14042 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 14:03:50.296248   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 14:03:50.299324   14042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 14:03:50.304227   14042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 14:03:50.309353   14042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 14:03:50.314234   14042 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 14:03:50.315452   14042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 14:03:50.319498   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:03:50.394731   14042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:03:50.400151   14042 certs.go:68] Setting up /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000 for IP: 10.0.2.15
	I0327 14:03:50.400158   14042 certs.go:194] generating shared ca certs ...
	I0327 14:03:50.400166   14042 certs.go:226] acquiring lock for ca certs: {Name:mkbfc84e619c8d37a470429cb64ebb1efb05c6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.400326   14042 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key
	I0327 14:03:50.401071   14042 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key
	I0327 14:03:50.401077   14042 certs.go:256] generating profile certs ...
	I0327 14:03:50.401364   14042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key
	I0327 14:03:50.401390   14042 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0
	I0327 14:03:50.401402   14042 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 14:03:50.619002   14042 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 ...
	I0327 14:03:50.619018   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0: {Name:mk46c4c69cec8e14adc115c5f9a746ac9de77e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.619331   14042 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0 ...
	I0327 14:03:50.619336   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0: {Name:mka1aecbeaeb70a08aae7fc5ff07a1d2988378fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.619493   14042 certs.go:381] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt.0d31b9d0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt
	I0327 14:03:50.619632   14042 certs.go:385] copying /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key.0d31b9d0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key
	I0327 14:03:50.621315   14042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.key
	I0327 14:03:50.621490   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem (1338 bytes)
	W0327 14:03:50.621685   14042 certs.go:480] ignoring /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752_empty.pem, impossibly tiny 0 bytes
	I0327 14:03:50.621693   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 14:03:50.621718   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem (1078 bytes)
	I0327 14:03:50.621737   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem (1123 bytes)
	I0327 14:03:50.621755   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/key.pem (1675 bytes)
	I0327 14:03:50.621797   14042 certs.go:484] found cert: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem (1708 bytes)
	I0327 14:03:50.622112   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 14:03:50.629498   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 14:03:50.636308   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 14:03:50.643642   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 14:03:50.651123   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 14:03:50.657752   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 14:03:50.664082   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 14:03:50.670499   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 14:03:50.677238   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/ssl/certs/117522.pem --> /usr/share/ca-certificates/117522.pem (1708 bytes)
	I0327 14:03:50.683484   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 14:03:50.690271   14042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/11752.pem --> /usr/share/ca-certificates/11752.pem (1338 bytes)
	I0327 14:03:50.697033   14042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 14:03:50.702282   14042 ssh_runner.go:195] Run: openssl version
	I0327 14:03:50.704104   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117522.pem && ln -fs /usr/share/ca-certificates/117522.pem /etc/ssl/certs/117522.pem"
	I0327 14:03:50.707141   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.708694   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 20:47 /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.708710   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117522.pem
	I0327 14:03:50.710486   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117522.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 14:03:50.713350   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 14:03:50.716334   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.717868   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 21:00 /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.717886   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 14:03:50.719633   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 14:03:50.722696   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11752.pem && ln -fs /usr/share/ca-certificates/11752.pem /etc/ssl/certs/11752.pem"
	I0327 14:03:50.725519   14042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.726826   14042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 20:47 /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.726841   14042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11752.pem
	I0327 14:03:50.728574   14042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11752.pem /etc/ssl/certs/51391683.0"
	I0327 14:03:50.731989   14042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 14:03:50.733560   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 14:03:50.735642   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 14:03:50.737594   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 14:03:50.739600   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 14:03:50.741615   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 14:03:50.743398   14042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
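
The openssl x509 -checkend 86400 calls above succeed only if a certificate is still valid 24 hours from now; a failing check is what triggers regeneration. The same test in pure Go with crypto/x509 (the path in main is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls
    // inside the given window from now, i.e. what -checkend tests.
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon) // true would trigger regeneration
    }
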
	I0327 14:03:50.745442   14042 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 14:03:50.745503   14042 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:03:50.755805   14042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 14:03:50.759124   14042 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 14:03:50.759131   14042 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 14:03:50.759133   14042 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 14:03:50.759155   14042 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 14:03:50.762041   14042 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 14:03:50.762332   14042 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-077000" does not appear in /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:03:50.762430   14042 kubeconfig.go:62] /Users/jenkins/minikube-integration/18158-11341/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-077000" cluster setting kubeconfig missing "stopped-upgrade-077000" context setting]
	I0327 14:03:50.762611   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:03:50.763042   14042 kapi.go:59] client config for stopped-upgrade-077000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043c3020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 14:03:50.763497   14042 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 14:03:50.766116   14042 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-077000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
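
Drift detection above rests on diff's exit status: 0 means the live kubeadm.yaml matches the newly generated one, 1 means they differ (here the criSocket scheme and cgroup driver changed), and the non-empty diff is the cue to reconfigure. A Go sketch of that check, assuming diff is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func configDrifted(current, proposed string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", current, proposed).CombinedOutput()
        if err == nil {
            return false, "", nil // exit status 0: files are identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit status 1: files differ
        }
        return false, "", err // exit status 2: trouble (e.g. missing file)
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        if drifted {
            fmt.Println("will reconfigure:\n" + diff)
        }
    }
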
	I0327 14:03:50.766123   14042 kubeadm.go:1154] stopping kube-system containers ...
	I0327 14:03:50.766165   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 14:03:50.777023   14042 docker.go:483] Stopping containers: [5e4db03ec227 048161dfe88e 9d8978a7a14e 7e3614c971ee 497390a10b43 a6edaca08a0a 44fb0f026eb6 6bd655ded881]
	I0327 14:03:50.777087   14042 ssh_runner.go:195] Run: docker stop 5e4db03ec227 048161dfe88e 9d8978a7a14e 7e3614c971ee 497390a10b43 a6edaca08a0a 44fb0f026eb6 6bd655ded881
	I0327 14:03:50.787915   14042 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 14:03:50.793530   14042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:03:50.796448   14042 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:03:50.796460   14042 kubeadm.go:156] found existing configuration files:
	
	I0327 14:03:50.796483   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf
	I0327 14:03:50.799328   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:03:50.799352   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:03:50.802037   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf
	I0327 14:03:50.804309   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:03:50.804331   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:03:50.807439   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf
	I0327 14:03:50.810187   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:03:50.810213   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:03:50.812603   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf
	I0327 14:03:50.815499   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:03:50.815520   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 14:03:50.818140   14042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:03:50.820811   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:50.842586   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.233965   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.353400   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 14:03:51.376631   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
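
Because existing configuration files were found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init. A compact Go sketch of driving those phases in order, with the binary and config paths from the log (the real invocations also prepend the versioned binaries directory to PATH via env):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            args := append([]string{kubeadm, "init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", cfg)
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
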
	I0327 14:03:51.405030   14042 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:03:51.405107   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:51.907287   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:52.407172   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:03:52.411687   14042 api_server.go:72] duration metric: took 1.00667075s to wait for apiserver process to appear ...
	I0327 14:03:52.411699   14042 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:03:52.411713   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:03:57.413832   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:03:57.413896   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:02.414187   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:02.414210   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:07.414523   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:07.414587   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:12.415070   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:12.415100   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:17.415641   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:17.415660   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:22.416516   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:22.416574   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:27.417452   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:27.417539   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:32.418710   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:32.418756   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:37.420554   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:37.420628   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:42.423304   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:42.423403   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:47.425997   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:04:47.426145   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:04:52.428694   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
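
Every probe in the loop above times out: the apiserver process appeared (pgrep succeeded at 14:03:52) but /healthz never answers, and each GET is cut off by the roughly 5-second client timeout visible in the spacing of the stopped: lines. A minimal Go version of such a probe loop, skipping TLS verification for brevity where minikube would instead trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the ~5s gap between probes in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // brief backoff between probes
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute))
        fmt.Println(err)
    }
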
	I0327 14:04:52.428862   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:04:52.440180   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:04:52.440265   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:04:52.452167   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:04:52.452233   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:04:52.462872   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:04:52.462946   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:04:52.473381   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:04:52.473462   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:04:52.483806   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:04:52.483869   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:04:52.494925   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:04:52.495005   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:04:52.504715   14042 logs.go:276] 0 containers: []
	W0327 14:04:52.504728   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:04:52.504790   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:04:52.516285   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:04:52.516304   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:04:52.516319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:04:52.535194   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:04:52.535204   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:04:52.547782   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:04:52.547793   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:04:52.562208   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:04:52.562221   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:04:52.589945   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:04:52.589960   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:04:52.610085   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:04:52.610097   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:04:52.622244   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:04:52.622257   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:04:52.645102   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:04:52.645116   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:04:52.657761   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:04:52.657774   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:04:52.698357   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:04:52.698378   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:04:52.713257   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:04:52.713268   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:04:52.738865   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:04:52.738883   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:04:52.751344   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:04:52.751365   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:04:52.764505   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:04:52.764517   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:04:52.768872   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:04:52.768880   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:04:52.893517   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:04:52.893529   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
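
Once the healthz wait stalls, diagnostics follow a fixed pattern: list container IDs matching the k8s_<component> naming convention with docker ps -a, then tail the last 400 log lines of each. A simplified Go rendering of that pattern (component list taken from the log, output handling reduced to a print):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists all container IDs whose names match the
    // kubelet-style k8s_<component> prefix, including exited ones.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, c := range components {
            ids, err := containersFor(c)
            if err != nil {
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
            }
        }
    }
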
	I0327 14:04:55.410082   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:00.411219   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:00.411438   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:00.441377   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:00.441472   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:00.458905   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:00.458974   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:00.470244   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:00.470304   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:00.480413   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:00.480483   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:00.493407   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:00.493473   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:00.504297   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:00.504362   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:00.514568   14042 logs.go:276] 0 containers: []
	W0327 14:05:00.514581   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:00.514637   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:00.533254   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:00.533282   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:00.533288   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:00.545189   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:00.545200   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:00.558529   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:00.558539   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:00.569812   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:00.569824   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:00.581170   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:00.581181   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:00.592986   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:00.592996   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:00.604999   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:00.605009   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:00.631570   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:00.631588   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:00.670633   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:00.670641   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:00.675314   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:00.675324   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:00.691574   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:00.691583   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:00.729343   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:00.729356   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:00.744321   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:00.744332   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:00.759148   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:00.759159   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:00.784328   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:00.784338   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:00.795940   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:00.795951   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
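	The sequence above shows the log-gathering pattern that repeats throughout this transcript: for each control-plane component, list matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each container's logs. A minimal, purely illustrative Go sketch of that pattern follows; the component list and helper names are assumptions for illustration, not minikube's actual implementation.

	```go
	// Hypothetical sketch of the gather pattern seen in the log: discover
	// containers named k8s_<component>, then run "docker logs --tail 400 <id>"
	// on each. Illustrative only; not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns IDs of containers whose name matches k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				// Mirrors the warning emitted when a filter matches nothing.
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Mirror of: /bin/bash -c "docker logs --tail 400 <id>"
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
			}
		}
	}
	```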
	I0327 14:05:03.320838   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:08.322815   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:08.323015   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:08.336539   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:08.336614   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:08.347593   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:08.347667   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:08.358123   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:08.358189   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:08.369442   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:08.369512   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:08.379886   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:08.379955   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:08.390441   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:08.390513   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:08.400932   14042 logs.go:276] 0 containers: []
	W0327 14:05:08.400943   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:08.401006   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:08.411474   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:08.411492   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:08.411498   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:08.428953   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:08.428963   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:08.453706   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:08.453713   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:08.467312   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:08.467323   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:08.478336   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:08.478346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:08.489662   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:08.489671   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:08.508014   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:08.508024   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:08.544659   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:08.544670   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:08.569110   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:08.569120   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:08.584398   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:08.584407   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:08.596509   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:08.596521   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:08.611899   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:08.611908   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:08.629531   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:08.629542   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:08.642304   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:08.642316   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:08.658277   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:08.658286   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:08.696492   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:08.696501   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:11.202960   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:16.205111   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
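	The timestamps make the probe cadence visible: each "Checking apiserver healthz" line is followed almost exactly 5 seconds later by a "stopped: ... context deadline exceeded" line, i.e. a client-side timeout, after which logs are gathered and the probe retries roughly 2.5 seconds later. A minimal sketch of such a polling loop, assuming a 5-second client timeout and a self-signed apiserver certificate (both inferred from the log, not taken from minikube's source):

	```go
	// Hypothetical sketch of the healthz probe loop visible in the log.
	// Endpoint, timeout, and retry interval are inferred from timestamps;
	// this is not minikube's actual implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
			Transport: &http.Transport{
				// The apiserver at 10.0.2.15:8443 serves a cluster-local cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Printf("stopped: %v\n", err) // e.g. context deadline exceeded
			} else {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2500 * time.Millisecond) // approximate retry cadence in the log
		}
	}
	```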
	I0327 14:05:16.205218   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:16.216330   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:16.216396   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:16.226873   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:16.226945   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:16.237214   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:16.237286   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:16.247782   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:16.247851   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:16.258131   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:16.258197   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:16.270172   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:16.270254   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:16.280222   14042 logs.go:276] 0 containers: []
	W0327 14:05:16.280235   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:16.280304   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:16.290731   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:16.290747   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:16.290752   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:16.315152   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:16.315160   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:16.352807   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:16.352815   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:16.366178   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:16.366187   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:16.383293   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:16.383304   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:16.395174   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:16.395184   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:16.406511   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:16.406522   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:16.421130   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:16.421145   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:16.434139   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:16.434158   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:16.471979   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:16.471992   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:16.483894   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:16.483917   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:16.496714   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:16.496727   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:16.510437   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:16.510449   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:16.522178   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:16.522189   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:16.526643   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:16.526651   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:16.540939   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:16.540950   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:19.068595   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:24.071142   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:24.071318   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:24.082983   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:24.083053   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:24.094444   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:24.094514   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:24.105391   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:24.105466   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:24.116831   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:24.116913   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:24.130734   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:24.130803   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:24.141950   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:24.142023   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:24.155258   14042 logs.go:276] 0 containers: []
	W0327 14:05:24.155273   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:24.155333   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:24.166021   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:24.166040   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:24.166046   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:24.181450   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:24.181461   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:24.217890   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:24.217901   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:24.232025   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:24.232034   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:24.242903   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:24.242917   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:24.255153   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:24.255163   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:24.266864   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:24.266876   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:24.271466   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:24.271476   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:24.307241   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:24.307253   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:24.331656   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:24.331669   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:24.348896   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:24.348905   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:24.361035   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:24.361046   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:24.385116   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:24.385123   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:24.396353   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:24.396364   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:24.410197   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:24.410208   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:24.429804   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:24.429816   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:26.943522   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:31.945859   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:31.946245   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:31.978068   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:31.978210   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:31.998370   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:31.998465   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:32.013081   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:32.013185   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:32.025188   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:32.025261   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:32.035694   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:32.035768   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:32.048704   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:32.048773   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:32.058833   14042 logs.go:276] 0 containers: []
	W0327 14:05:32.058845   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:32.058897   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:32.069696   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:32.069716   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:32.069722   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:32.108245   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:32.108257   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:32.123762   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:32.123774   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:32.135985   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:32.135998   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:32.148237   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:32.148248   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:32.159801   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:32.159814   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:32.195608   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:32.195623   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:32.207227   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:32.207238   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:32.221582   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:32.221591   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:32.235612   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:32.235624   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:32.247582   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:32.247593   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:32.271697   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:32.271704   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:32.284677   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:32.284688   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:32.289023   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:32.289032   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:32.317800   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:32.317811   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:32.332264   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:32.332276   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:34.851068   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:39.852224   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:39.852487   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:39.874852   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:39.874959   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:39.890310   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:39.890385   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:39.904721   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:39.904786   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:39.915528   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:39.915601   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:39.925767   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:39.925830   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:39.937053   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:39.937119   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:39.947132   14042 logs.go:276] 0 containers: []
	W0327 14:05:39.947142   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:39.947202   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:39.958927   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:39.958946   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:39.958951   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:39.963423   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:39.963433   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:39.988103   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:39.988115   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:40.006289   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:40.006301   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:40.019533   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:40.019545   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:40.039397   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:40.039409   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:40.064871   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:40.064878   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:40.077598   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:40.077614   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:40.091609   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:40.091623   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:40.126372   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:40.126382   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:40.137840   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:40.137858   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:40.177707   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:40.177730   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:40.199852   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:40.199864   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:40.215596   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:40.215610   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:40.227565   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:40.227579   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:40.244967   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:40.244981   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:42.759032   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:47.761319   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:47.761531   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:47.778823   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:47.778908   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:47.794310   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:47.794373   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:47.805523   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:47.805584   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:47.816044   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:47.816113   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:47.826824   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:47.826893   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:47.837423   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:47.837489   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:47.847635   14042 logs.go:276] 0 containers: []
	W0327 14:05:47.847645   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:47.847694   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:47.858143   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:47.858161   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:47.858167   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:47.894510   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:47.894519   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:47.919232   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:47.919245   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:47.934398   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:47.934411   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:47.964185   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:47.964196   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:48.002929   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:48.002941   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:48.017041   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:48.017051   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:48.041787   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:48.041799   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:48.053351   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:48.053360   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:48.065230   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:48.065239   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:48.069639   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:48.069647   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:48.084118   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:48.084129   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:48.094969   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:48.094979   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:48.109335   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:48.109345   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:48.126441   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:48.126453   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:48.140670   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:48.140681   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:50.654406   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:05:55.656937   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:05:55.657221   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:05:55.683081   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:05:55.683185   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:05:55.698462   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:05:55.698540   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:05:55.711380   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:05:55.711453   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:05:55.726832   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:05:55.726911   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:05:55.738282   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:05:55.738349   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:05:55.748869   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:05:55.748934   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:05:55.759134   14042 logs.go:276] 0 containers: []
	W0327 14:05:55.759143   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:05:55.759198   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:05:55.769333   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:05:55.769353   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:05:55.769358   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:05:55.783048   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:05:55.783058   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:05:55.794406   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:05:55.794418   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:05:55.806419   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:05:55.806430   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:05:55.830013   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:05:55.830024   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:05:55.865901   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:05:55.865911   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:05:55.869960   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:05:55.869966   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:05:55.883752   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:05:55.883762   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:05:55.895971   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:05:55.895983   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:05:55.907482   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:05:55.907495   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:05:55.943335   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:05:55.943346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:05:55.970526   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:05:55.970537   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:05:55.985004   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:05:55.985018   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:05:56.000640   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:05:56.000651   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:05:56.018157   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:05:56.018172   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:05:56.030009   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:05:56.030020   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:05:58.543659   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:03.545954   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:03.546224   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:03.568284   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:03.568387   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:03.583146   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:03.583219   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:03.595689   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:03.595762   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:03.606371   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:03.606447   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:03.616399   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:03.616475   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:03.626736   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:03.626801   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:03.637003   14042 logs.go:276] 0 containers: []
	W0327 14:06:03.637014   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:03.637072   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:03.646875   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:03.646894   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:03.646900   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:03.660688   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:03.660701   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:03.675626   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:03.675637   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:03.713282   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:03.713290   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:03.747306   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:03.747319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:03.759013   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:03.759022   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:03.773596   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:03.773607   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:03.790875   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:03.790889   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:03.813998   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:03.814008   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:03.818226   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:03.818235   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:03.842452   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:03.842464   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:03.856463   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:03.856473   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:03.867870   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:03.867884   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:03.885270   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:03.885280   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:03.899157   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:03.899167   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:03.910956   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:03.910966   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:06.425846   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:11.428095   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:11.428279   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:11.442749   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:11.442824   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:11.454523   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:11.454593   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:11.467054   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:11.467126   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:11.478308   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:11.478380   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:11.488838   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:11.488908   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:11.499373   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:11.499442   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:11.511522   14042 logs.go:276] 0 containers: []
	W0327 14:06:11.511535   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:11.511595   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:11.522526   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:11.522548   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:11.522554   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:11.547616   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:11.547627   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:11.562392   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:11.562403   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:11.575544   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:11.575555   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:11.587920   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:11.587933   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:11.608542   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:11.608556   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:11.632123   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:11.632134   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:11.649626   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:11.649638   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:11.661580   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:11.661590   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:11.684800   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:11.684808   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:11.721249   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:11.721259   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:11.725463   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:11.725470   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:11.765049   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:11.765061   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:11.779624   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:11.779635   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:11.790688   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:11.790701   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:11.805922   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:11.805935   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:14.319638   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:19.321893   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:19.322124   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:19.343538   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:19.343633   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:19.365486   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:19.365568   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:19.377024   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:19.377094   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:19.390680   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:19.390757   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:19.401109   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:19.401175   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:19.411427   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:19.411502   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:19.421556   14042 logs.go:276] 0 containers: []
	W0327 14:06:19.421566   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:19.421622   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:19.432935   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:19.432955   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:19.432961   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:19.469493   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:19.469502   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:19.493595   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:19.493604   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:19.505202   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:19.505213   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:19.510652   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:19.510664   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:19.534676   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:19.534686   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:19.546534   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:19.546544   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:19.561615   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:19.561625   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:19.578813   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:19.578824   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:19.614354   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:19.614366   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:19.628039   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:19.628049   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:19.641758   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:19.641768   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:19.653869   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:19.653879   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:19.665618   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:19.665628   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:19.680215   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:19.680227   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:19.691785   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:19.691796   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:22.208804   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:27.211047   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:27.211223   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:27.223388   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:27.223468   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:27.235418   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:27.235492   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:27.245948   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:27.246017   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:27.259663   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:27.259742   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:27.270584   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:27.270650   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:27.280919   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:27.280984   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:27.291055   14042 logs.go:276] 0 containers: []
	W0327 14:06:27.291071   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:27.291129   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:27.301285   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:27.301302   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:27.301308   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:27.312827   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:27.312842   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:27.316840   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:27.316850   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:27.350317   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:27.350330   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:27.369401   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:27.369412   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:27.380908   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:27.380919   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:27.403434   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:27.403443   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:27.427848   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:27.427865   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:27.449255   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:27.449264   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:27.460937   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:27.460957   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:27.478357   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:27.478368   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:27.504867   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:27.504879   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:27.516360   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:27.516372   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:27.534697   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:27.534707   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:27.573965   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:27.573974   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:27.588407   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:27.588418   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:30.107345   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:35.109836   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
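
The two lines above show the health-probe loop driving this whole section: api_server.go issues a GET against https://10.0.2.15:8443/healthz with a five-second client timeout (the timestamps and the "Client.Timeout exceeded" error both confirm it), and each failed probe triggers another round of diagnostic log gathering before the next attempt. A minimal Go sketch of that probe pattern, using a plain net/http client rather than minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall deadline passes.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, mirroring the 5s gaps in the log
		Transport: &http.Transport{
			// Skip cert verification for this sketch only; the real check
			// would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered healthz
			}
		}
		time.Sleep(2 * time.Second) // brief pause; the log shows ~3s between attempts
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, overall)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
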
	I0327 14:06:35.110138   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:35.137860   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:35.137992   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:35.157243   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:35.157322   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:35.176575   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:35.176648   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:35.190290   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:35.190360   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:35.200859   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:35.200919   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:35.216708   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:35.216775   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:35.227076   14042 logs.go:276] 0 containers: []
	W0327 14:06:35.227087   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:35.227140   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:35.238003   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:35.238021   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:35.238027   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:35.274800   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:35.274811   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:35.294156   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:35.294166   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:35.330346   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:35.330357   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:35.348614   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:35.348624   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:35.363342   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:35.363357   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:35.376604   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:35.376616   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:35.400241   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:35.400249   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:35.438209   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:35.438219   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:35.449937   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:35.449952   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:35.462199   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:35.462214   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:35.474957   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:35.474968   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:35.486816   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:35.486828   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:35.500947   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:35.500958   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:35.524618   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:35.524628   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:35.536458   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:35.536470   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
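
Each gathering cycle begins by enumerating control-plane containers, one component at a time, with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; kindnet consistently returns zero IDs, which the tool downgrades to a warning rather than an error. A hypothetical helper reproducing that enumeration (assumes a local docker CLI on PATH; minikube actually runs these over its ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers, running or exited, whose names
// match the kubeadm naming convention k8s_<component>_..., exactly like the
// --filter=name=k8s_... invocations above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
	}
}
```
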
	I0327 14:06:38.041315   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:43.043230   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:43.043447   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:43.061137   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:43.061223   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:43.074744   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:43.074816   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:43.085923   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:43.085998   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:43.096251   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:43.096450   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:43.108163   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:43.108232   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:43.125412   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:43.125489   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:43.136536   14042 logs.go:276] 0 containers: []
	W0327 14:06:43.136549   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:43.136607   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:43.147156   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:43.147173   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:43.147180   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:43.165375   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:43.165389   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:43.176885   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:43.176898   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:43.188270   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:43.188279   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:43.226249   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:43.226266   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:43.262356   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:43.262367   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:43.276303   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:43.276319   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:43.296674   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:43.296684   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:43.307841   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:43.307853   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:43.312359   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:43.312364   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:43.335942   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:43.335952   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:43.347541   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:43.347551   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:43.359336   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:43.359348   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:43.381840   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:43.381847   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:43.399648   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:43.399658   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:43.414128   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:43.414138   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
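
The "container status" step just above relies on a shell fallback chain: the backquoted `which crictl || echo crictl` substitutes crictl's absolute path so the command survives sudo's restricted PATH, and if crictl is missing or fails, the trailing `|| sudo docker ps -a` branch runs instead. Per-container logs are capped with `docker logs --tail 400`, and host services (kubelet, docker, cri-docker) come from journalctl. A sketch of these gathering commands run locally, not via minikube's ssh_runner, with sudo assumed non-interactive:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and prints its combined output, mirroring
// the "Gathering logs for ..." steps above in simplified form.
func gather(name string, args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("Docker", "sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
	// The backquotes resolve crictl's full path for sudo; if crictl is absent,
	// the `|| sudo docker ps -a` branch supplies the container listing instead.
	gather("container status", "/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	gather("kube-apiserver [35ff2bce470f]", "docker", "logs", "--tail", "400", "35ff2bce470f")
}
```
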
	I0327 14:06:45.929634   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:50.930702   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:50.930846   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:50.941840   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:50.941914   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:50.952749   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:50.952821   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:50.963443   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:50.963502   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:50.973886   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:50.973958   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:50.985613   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:50.985679   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:50.996674   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:50.996745   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:51.006859   14042 logs.go:276] 0 containers: []
	W0327 14:06:51.006873   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:51.006933   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:51.017491   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:51.017510   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:51.017516   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:51.033987   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:51.034000   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:51.052452   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:51.052462   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:51.066295   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:51.066308   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:51.077923   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:51.077934   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:51.091698   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:51.091709   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:51.117250   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:51.117262   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:51.135314   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:51.135325   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:51.146772   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:51.146786   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:51.169988   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:51.169995   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:51.181374   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:51.181388   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:51.185457   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:51.185463   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:51.219536   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:51.219548   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:06:51.232088   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:51.232100   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:51.243423   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:51.243436   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:51.279706   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:51.279719   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:53.795571   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:06:58.797850   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:06:58.798019   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:06:58.809501   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:06:58.809588   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:06:58.820522   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:06:58.820589   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:06:58.831077   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:06:58.831150   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:06:58.841172   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:06:58.841243   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:06:58.851533   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:06:58.851609   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:06:58.865675   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:06:58.865747   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:06:58.875819   14042 logs.go:276] 0 containers: []
	W0327 14:06:58.875830   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:06:58.875888   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:06:58.886437   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:06:58.886454   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:06:58.886460   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:06:58.923265   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:06:58.923279   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:06:58.953178   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:06:58.953188   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:06:58.964409   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:06:58.964421   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:06:58.981935   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:06:58.981947   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:06:58.993650   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:06:58.993660   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:06:59.005462   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:06:59.005476   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:06:59.043143   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:06:59.043152   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:06:59.047754   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:06:59.047760   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:06:59.061845   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:06:59.061858   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:06:59.077698   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:06:59.077709   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:06:59.089191   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:06:59.089201   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:06:59.111552   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:06:59.111561   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:06:59.126186   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:06:59.126196   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:06:59.140006   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:06:59.140017   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:06:59.154442   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:06:59.154452   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:01.667898   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:06.669459   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:06.669624   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:06.691077   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:06.691180   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:06.705287   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:06.705363   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:06.717371   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:06.717439   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:06.737465   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:06.737541   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:06.748106   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:06.748173   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:06.758605   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:06.758676   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:06.768930   14042 logs.go:276] 0 containers: []
	W0327 14:07:06.768943   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:06.769002   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:06.779675   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:06.779712   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:06.779719   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:06.784123   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:06.784129   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:06.800934   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:06.800944   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:06.824818   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:06.824828   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:06.863639   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:06.863654   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:06.898383   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:06.898398   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:06.910456   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:06.910467   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:06.922424   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:06.922437   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:06.935327   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:06.935340   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:06.971166   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:06.971178   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:06.985104   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:06.985119   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:07.000444   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:07.000457   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:07.013347   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:07.013358   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:07.027515   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:07.027524   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:07.042568   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:07.042582   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:07.054036   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:07.054050   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:09.573359   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:14.575607   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:14.575886   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:14.605345   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:14.605474   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:14.623569   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:14.623650   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:14.637306   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:14.637379   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:14.648923   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:14.648995   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:14.659482   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:14.659557   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:14.670024   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:14.670102   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:14.680088   14042 logs.go:276] 0 containers: []
	W0327 14:07:14.680102   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:14.680154   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:14.691450   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:14.691467   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:14.691474   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:14.696766   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:14.696780   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:14.713475   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:14.713490   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:14.741347   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:14.741360   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:14.761762   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:14.761776   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:14.796318   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:14.796330   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:14.811291   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:14.811307   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:14.825876   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:14.825890   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:14.848854   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:14.848861   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:14.886272   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:14.886281   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:14.900436   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:14.900450   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:14.912625   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:14.912639   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:14.929423   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:14.929437   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:14.943131   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:14.943143   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:14.957876   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:14.957890   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:14.982727   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:14.982737   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:17.495661   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:22.497896   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:22.498062   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:22.510389   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:22.510470   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:22.521156   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:22.521229   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:22.531780   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:22.531842   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:22.542300   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:22.542373   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:22.561356   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:22.561432   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:22.574716   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:22.574788   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:22.585165   14042 logs.go:276] 0 containers: []
	W0327 14:07:22.585177   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:22.585234   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:22.595862   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:22.595879   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:22.595884   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:22.608241   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:22.608252   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:22.645634   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:22.645649   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:22.660491   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:22.660504   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:22.676154   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:22.676167   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:22.691347   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:22.691359   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:22.696087   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:22.696096   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:22.716001   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:22.716011   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:22.727131   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:22.727141   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:22.739981   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:22.739993   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:22.763202   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:22.763218   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:22.788676   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:22.788687   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:22.800180   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:22.800192   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:22.819077   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:22.819091   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:22.858061   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:22.858077   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:22.871792   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:22.871804   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:25.385841   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:30.388443   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:30.388769   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:30.418857   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:30.418988   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:30.439826   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:30.439918   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:30.452956   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:30.453027   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:30.464140   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:30.464209   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:30.474509   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:30.474579   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:30.485765   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:30.485835   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:30.497456   14042 logs.go:276] 0 containers: []
	W0327 14:07:30.497468   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:30.497526   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:30.508323   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:30.508340   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:30.508346   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:30.520501   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:30.520513   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:30.532307   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:30.532320   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:30.544099   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:30.544110   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:30.567218   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:30.567230   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:30.571884   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:30.571893   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:30.587415   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:30.587425   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:30.599757   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:30.599767   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:30.634861   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:30.634878   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:30.649171   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:30.649180   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:30.674489   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:30.674500   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:30.688457   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:30.688473   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:30.702813   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:30.702827   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:30.719923   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:30.719933   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:30.734986   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:30.734999   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:30.747170   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:30.747181   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:33.287952   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:38.290274   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:38.290504   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:38.313746   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:38.313865   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:38.332069   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:38.332141   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:38.349514   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:38.349582   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:38.359434   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:38.359507   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:38.374519   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:38.374585   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:38.384914   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:38.384986   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:38.395119   14042 logs.go:276] 0 containers: []
	W0327 14:07:38.395130   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:38.395182   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:38.405810   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:38.405827   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:38.405834   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:38.417778   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:38.417788   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:38.422158   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:38.422165   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:38.434250   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:38.434261   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:38.451254   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:38.451264   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:38.463091   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:38.463102   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:38.499371   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:38.499380   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:38.521095   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:38.521102   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:38.535522   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:38.535534   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:38.547642   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:38.547652   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:38.585902   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:38.585914   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:38.609711   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:38.609722   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:38.623348   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:38.623362   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:38.642065   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:38.642076   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:38.656123   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:38.656133   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:38.670658   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:38.670667   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:41.188897   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:46.191206   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:46.191369   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:07:46.202217   14042 logs.go:276] 2 containers: [35ff2bce470f 7e3614c971ee]
	I0327 14:07:46.202291   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:07:46.213220   14042 logs.go:276] 2 containers: [ff7eec1ecf31 9d8978a7a14e]
	I0327 14:07:46.213292   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:07:46.226405   14042 logs.go:276] 1 containers: [f8c2b689d615]
	I0327 14:07:46.226472   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:07:46.236944   14042 logs.go:276] 2 containers: [da508f6ce75b 5e4db03ec227]
	I0327 14:07:46.237013   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:07:46.249478   14042 logs.go:276] 1 containers: [2848ccb8c6ca]
	I0327 14:07:46.249542   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:07:46.260532   14042 logs.go:276] 2 containers: [8b819b4acb84 048161dfe88e]
	I0327 14:07:46.260601   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:07:46.270672   14042 logs.go:276] 0 containers: []
	W0327 14:07:46.270683   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:07:46.270740   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:07:46.284820   14042 logs.go:276] 1 containers: [06fda8b95995]
	I0327 14:07:46.284836   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:07:46.284842   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:07:46.288860   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:07:46.288870   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:07:46.323051   14042 logs.go:123] Gathering logs for etcd [ff7eec1ecf31] ...
	I0327 14:07:46.323064   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7eec1ecf31"
	I0327 14:07:46.337661   14042 logs.go:123] Gathering logs for kube-scheduler [da508f6ce75b] ...
	I0327 14:07:46.337671   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da508f6ce75b"
	I0327 14:07:46.349278   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:07:46.349289   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 14:07:46.385883   14042 logs.go:123] Gathering logs for kube-apiserver [7e3614c971ee] ...
	I0327 14:07:46.385891   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3614c971ee"
	I0327 14:07:46.409890   14042 logs.go:123] Gathering logs for coredns [f8c2b689d615] ...
	I0327 14:07:46.409900   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c2b689d615"
	I0327 14:07:46.421251   14042 logs.go:123] Gathering logs for kube-apiserver [35ff2bce470f] ...
	I0327 14:07:46.421261   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ff2bce470f"
	I0327 14:07:46.438025   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:07:46.438037   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:07:46.451624   14042 logs.go:123] Gathering logs for etcd [9d8978a7a14e] ...
	I0327 14:07:46.451635   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d8978a7a14e"
	I0327 14:07:46.466885   14042 logs.go:123] Gathering logs for kube-scheduler [5e4db03ec227] ...
	I0327 14:07:46.466895   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4db03ec227"
	I0327 14:07:46.481632   14042 logs.go:123] Gathering logs for kube-proxy [2848ccb8c6ca] ...
	I0327 14:07:46.481642   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2848ccb8c6ca"
	I0327 14:07:46.493234   14042 logs.go:123] Gathering logs for kube-controller-manager [8b819b4acb84] ...
	I0327 14:07:46.493244   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b819b4acb84"
	I0327 14:07:46.511011   14042 logs.go:123] Gathering logs for kube-controller-manager [048161dfe88e] ...
	I0327 14:07:46.511021   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 048161dfe88e"
	I0327 14:07:46.523642   14042 logs.go:123] Gathering logs for storage-provisioner [06fda8b95995] ...
	I0327 14:07:46.523653   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06fda8b95995"
	I0327 14:07:46.535702   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:07:46.535712   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:07:49.060512   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:07:54.063159   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:07:54.063262   14042 kubeadm.go:591] duration metric: took 4m3.30753925s to restartPrimaryControlPlane
	W0327 14:07:54.063360   14042 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 14:07:54.063403   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 14:07:55.142916   14042 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.079512083s)
	I0327 14:07:55.142983   14042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 14:07:55.148227   14042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 14:07:55.151316   14042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 14:07:55.154097   14042 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 14:07:55.154106   14042 kubeadm.go:156] found existing configuration files:
	
	I0327 14:07:55.154129   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf
	I0327 14:07:55.156524   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 14:07:55.156545   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 14:07:55.159051   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf
	I0327 14:07:55.161979   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 14:07:55.162000   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 14:07:55.164404   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf
	I0327 14:07:55.167205   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 14:07:55.167229   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 14:07:55.170283   14042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf
	I0327 14:07:55.172733   14042 kubeadm.go:162] "https://control-plane.minikube.internal:52498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 14:07:55.172756   14042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
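The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise so that the subsequent `kubeadm init` regenerates it. A minimal sketch of that pattern, with a hypothetical local `run` helper standing in for minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command the way the ssh_runner lines above do; here it is a
// plain local bash invocation standing in for minikube's SSH runner.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func cleanupStaleConfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint (or the whole file) is
		// missing -- the "may not be in ... - will remove" case logged above.
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			_ = run("sudo rm -f " + path)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:52498")
}
```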
	I0327 14:07:55.175444   14042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 14:07:55.193121   14042 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 14:07:55.193158   14042 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 14:07:55.246597   14042 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 14:07:55.246654   14042 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 14:07:55.246699   14042 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 14:07:55.296137   14042 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 14:07:55.301319   14042 out.go:204]   - Generating certificates and keys ...
	I0327 14:07:55.301391   14042 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 14:07:55.301430   14042 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 14:07:55.301466   14042 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 14:07:55.301504   14042 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 14:07:55.301546   14042 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 14:07:55.301574   14042 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 14:07:55.301602   14042 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 14:07:55.301656   14042 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 14:07:55.301708   14042 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 14:07:55.301782   14042 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 14:07:55.301802   14042 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 14:07:55.301839   14042 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 14:07:55.357770   14042 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 14:07:55.427674   14042 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 14:07:55.599084   14042 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 14:07:55.746491   14042 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 14:07:55.782808   14042 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 14:07:55.783213   14042 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 14:07:55.783234   14042 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 14:07:55.864839   14042 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 14:07:55.869238   14042 out.go:204]   - Booting up control plane ...
	I0327 14:07:55.869288   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 14:07:55.869332   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 14:07:55.869374   14042 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 14:07:55.869427   14042 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 14:07:55.869509   14042 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 14:08:00.373951   14042 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504324 seconds
	I0327 14:08:00.374089   14042 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 14:08:00.382887   14042 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 14:08:00.893345   14042 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 14:08:00.893461   14042 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-077000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 14:08:01.397269   14042 kubeadm.go:309] [bootstrap-token] Using token: mai42w.fgmzuazwr1bj0hq8
	I0327 14:08:01.401666   14042 out.go:204]   - Configuring RBAC rules ...
	I0327 14:08:01.401734   14042 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 14:08:01.401786   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 14:08:01.404452   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 14:08:01.406494   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 14:08:01.407284   14042 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 14:08:01.408102   14042 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 14:08:01.411068   14042 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 14:08:01.565987   14042 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 14:08:01.801305   14042 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 14:08:01.801763   14042 kubeadm.go:309] 
	I0327 14:08:01.801794   14042 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 14:08:01.801797   14042 kubeadm.go:309] 
	I0327 14:08:01.801834   14042 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 14:08:01.801874   14042 kubeadm.go:309] 
	I0327 14:08:01.801888   14042 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 14:08:01.801926   14042 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 14:08:01.801959   14042 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 14:08:01.801965   14042 kubeadm.go:309] 
	I0327 14:08:01.801995   14042 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 14:08:01.802001   14042 kubeadm.go:309] 
	I0327 14:08:01.802024   14042 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 14:08:01.802027   14042 kubeadm.go:309] 
	I0327 14:08:01.802057   14042 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 14:08:01.802115   14042 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 14:08:01.802156   14042 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 14:08:01.802160   14042 kubeadm.go:309] 
	I0327 14:08:01.802203   14042 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 14:08:01.802295   14042 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 14:08:01.802301   14042 kubeadm.go:309] 
	I0327 14:08:01.802385   14042 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mai42w.fgmzuazwr1bj0hq8 \
	I0327 14:08:01.802436   14042 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf \
	I0327 14:08:01.802446   14042 kubeadm.go:309] 	--control-plane 
	I0327 14:08:01.802448   14042 kubeadm.go:309] 
	I0327 14:08:01.802487   14042 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 14:08:01.802518   14042 kubeadm.go:309] 
	I0327 14:08:01.802561   14042 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mai42w.fgmzuazwr1bj0hq8 \
	I0327 14:08:01.802639   14042 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6c0714cdb1f04769bb03c6964de3379945b572d957d3c1e1ebd2217e89609ebf 
	I0327 14:08:01.802773   14042 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 14:08:01.802784   14042 cni.go:84] Creating CNI manager for ""
	I0327 14:08:01.802794   14042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:08:01.809355   14042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 14:08:01.813549   14042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 14:08:01.816556   14042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
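The 457-byte file copied here is the bridge CNI configuration that the following "Configuring bridge CNI" step refers to. The log does not show its contents; a representative bridge conflist (the subnet and flag values are assumptions for illustration, not the literal file) looks like:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```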
	I0327 14:08:01.821624   14042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 14:08:01.821664   14042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 14:08:01.821691   14042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-077000 minikube.k8s.io/updated_at=2024_03_27T14_08_01_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=stopped-upgrade-077000 minikube.k8s.io/primary=true
	I0327 14:08:01.874039   14042 kubeadm.go:1107] duration metric: took 52.40775ms to wait for elevateKubeSystemPrivileges
	I0327 14:08:01.874048   14042 ops.go:34] apiserver oom_adj: -16
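The -16 reported here comes from the probe a few lines up: pgrep locates the kube-apiserver process and its oom_adj is read from procfs (-16 tells the kernel's OOM killer to strongly prefer other victims). A minimal local sketch of the same check, with illustrative flags; minikube runs it through the SSH runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest exact-match kube-apiserver process (pgrep -xn).
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	// oom_adj is a small text file in procfs.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
```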
	W0327 14:08:01.874064   14042 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 14:08:01.874069   14042 kubeadm.go:393] duration metric: took 4m11.132158209s to StartCluster
	I0327 14:08:01.874079   14042 settings.go:142] acquiring lock: {Name:mkdd1901c274fdaab611fbdc96cb9f09e61b9c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:08:01.874159   14042 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:08:01.874609   14042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/kubeconfig: {Name:mk85311d9e9c860444c586596759513f7cc3f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:08:01.874813   14042 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:08:01.878413   14042 out.go:177] * Verifying Kubernetes components...
	I0327 14:08:01.874846   14042 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 14:08:01.874891   14042 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:08:01.886412   14042 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-077000"
	I0327 14:08:01.886416   14042 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-077000"
	I0327 14:08:01.886430   14042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-077000"
	I0327 14:08:01.886434   14042 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-077000"
	W0327 14:08:01.886437   14042 addons.go:243] addon storage-provisioner should already be in state true
	I0327 14:08:01.886459   14042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 14:08:01.886467   14042 host.go:66] Checking if "stopped-upgrade-077000" exists ...
	I0327 14:08:01.891442   14042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 14:08:01.895350   14042 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:08:01.895356   14042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 14:08:01.895364   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:08:01.896335   14042 kapi.go:59] client config for stopped-upgrade-077000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/stopped-upgrade-077000/client.key", CAFile:"/Users/jenkins/minikube-integration/18158-11341/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043c3020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
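The sshutil line above dials the guest over the forwarded localhost port with key-based auth. A minimal sketch of that connection using golang.org/x/crypto/ssh; the key path is shortened to an assumed location, and the host-key callback is relaxed purely because this is a throwaway test VM:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Port 52464 and user "docker" are the values logged above; the key path
	// here is an illustrative stand-in for the Jenkins workspace path.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/stopped-upgrade-077000/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
	}
	client, err := ssh.Dial("tcp", "localhost:52464", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s", out)
}
```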
	I0327 14:08:01.896456   14042 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-077000"
	W0327 14:08:01.896463   14042 addons.go:243] addon default-storageclass should already be in state true
	I0327 14:08:01.896474   14042 host.go:66] Checking if "stopped-upgrade-077000" exists ...
	I0327 14:08:01.897384   14042 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 14:08:01.897393   14042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 14:08:01.897399   14042 sshutil.go:53] new ssh client: &{IP:localhost Port:52464 SSHKeyPath:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/stopped-upgrade-077000/id_rsa Username:docker}
	I0327 14:08:01.977338   14042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 14:08:01.982038   14042 api_server.go:52] waiting for apiserver process to appear ...
	I0327 14:08:01.982082   14042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 14:08:01.985642   14042 api_server.go:72] duration metric: took 110.818666ms to wait for apiserver process to appear ...
	I0327 14:08:01.985649   14042 api_server.go:88] waiting for apiserver healthz status ...
	I0327 14:08:01.985656   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:01.997457   14042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 14:08:01.998334   14042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 14:08:06.987690   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:06.987725   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:11.987910   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:11.987953   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:16.988529   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:16.988600   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:21.989022   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:21.989062   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:26.990122   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:26.990143   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:31.990971   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:31.990993   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 14:08:32.353876   14042 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 14:08:32.359116   14042 out.go:177] * Enabled addons: storage-provisioner
	I0327 14:08:32.366891   14042 addons.go:505] duration metric: took 30.492475542s for enable addons: enabled=[storage-provisioner]
	I0327 14:08:36.992050   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:36.992073   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:41.993719   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:41.993737   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:46.995635   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:46.995663   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:51.997756   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:51.997783   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:08:56.999876   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:08:56.999919   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:09:02.002068   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
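Each "Checking apiserver healthz ... stopped" pair above is one iteration of a poll with a 5s per-request client timeout; the driver keeps retrying until its 6m node wait expires, dropping into the log-gathering pass below between rounds. A minimal sketch of that loop, with the timeout and deadline taken from the log and everything else illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, wait time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the Client.Timeout errors above
		Transport: &http.Transport{
			// The apiserver's serving cert is not trusted by the test host.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // control plane is healthy
			}
		}
		time.Sleep(5 * time.Second) // retry cadence seen in the timestamps above
	}
	return fmt.Errorf("apiserver never became healthy within %s", wait)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```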
	I0327 14:09:02.002167   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:09:02.015635   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:09:02.015709   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:09:02.025650   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:09:02.025715   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:09:02.036382   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:09:02.036459   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:09:02.047035   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:09:02.047102   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:09:02.057398   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:09:02.057467   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:09:02.068182   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:09:02.068262   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:09:02.078420   14042 logs.go:276] 0 containers: []
	W0327 14:09:02.078436   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:09:02.078497   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:09:02.089014   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:09:02.089035   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:09:02.089040   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:09:02.093856   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:09:02.093864   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:09:02.108192   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:09:02.108206   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:09:02.120129   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:09:02.120141   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:09:02.132339   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:09:02.132353   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:09:02.149732   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:09:02.149744   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:09:02.161481   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:09:02.161494   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:09:02.179275   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:09:02.179286   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:09:02.190297   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:09:02.190308   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:09:02.214955   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:09:02.214962   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:09:02.247761   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:02.247853   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:02.249301   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:09:02.249305   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:09:02.288766   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:09:02.288778   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:09:02.303243   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:09:02.303257   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:09:02.316908   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:02.316920   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:09:02.316944   14042 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 14:09:02.316948   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:02.316952   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:02.316956   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:02.316958   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:09:12.320955   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:09:17.323214   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:09:17.323387   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:09:17.340236   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:09:17.340324   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:09:17.353004   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:09:17.353077   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:09:17.363749   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:09:17.363816   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:09:17.373640   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:09:17.373705   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:09:17.384129   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:09:17.384190   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:09:17.397683   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:09:17.397749   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:09:17.408078   14042 logs.go:276] 0 containers: []
	W0327 14:09:17.408090   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:09:17.408148   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:09:17.418261   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:09:17.418276   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:09:17.418282   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:09:17.422453   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:09:17.422458   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:09:17.459493   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:09:17.459508   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:09:17.474114   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:09:17.474128   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:09:17.485464   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:09:17.485477   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:09:17.498688   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:09:17.498702   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:09:17.511090   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:09:17.511104   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:09:17.535422   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:09:17.535430   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:09:17.568328   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:17.568419   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:17.569913   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:09:17.569917   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:09:17.583986   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:09:17.583996   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:09:17.597741   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:09:17.597752   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:09:17.609888   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:09:17.609899   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:09:17.621526   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:09:17.621540   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:09:17.639258   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:17.639267   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:09:17.639292   14042 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 14:09:17.639296   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:17.639299   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:17.639304   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:17.639307   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:09:27.643397   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:09:32.645900   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:09:32.646016   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:09:32.657590   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:09:32.657660   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:09:32.668424   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:09:32.668513   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:09:32.679409   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:09:32.679480   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:09:32.691034   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:09:32.691109   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:09:32.702117   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:09:32.702192   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:09:32.713033   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:09:32.713119   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:09:32.729891   14042 logs.go:276] 0 containers: []
	W0327 14:09:32.729903   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:09:32.729959   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:09:32.741608   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:09:32.741629   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:09:32.741637   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:09:32.756948   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:09:32.756959   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:09:32.769586   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:09:32.769597   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:09:32.787624   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:09:32.787634   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:09:32.812978   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:09:32.812990   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:09:32.826270   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:09:32.826281   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:09:32.859689   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:32.859780   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:32.861271   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:09:32.861279   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:09:32.875459   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:09:32.875472   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:09:32.886957   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:09:32.886967   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:09:32.898806   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:09:32.898817   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:09:32.910539   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:09:32.910550   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:09:32.914770   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:09:32.914780   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:09:32.950524   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:09:32.950535   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:09:32.965197   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:32.965207   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:09:32.965234   14042 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 14:09:32.965240   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:32.965243   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:32.965247   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:32.965251   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:09:42.967141   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:09:47.969497   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:09:47.970072   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:09:48.010594   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:09:48.010734   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:09:48.033000   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:09:48.033096   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:09:48.047326   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:09:48.047399   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:09:48.059446   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:09:48.059508   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:09:48.070419   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:09:48.070477   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:09:48.081494   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:09:48.081569   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:09:48.091844   14042 logs.go:276] 0 containers: []
	W0327 14:09:48.091856   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:09:48.091913   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:09:48.105883   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:09:48.105897   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:09:48.105903   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:09:48.122425   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:09:48.122438   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:09:48.144922   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:09:48.144935   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:09:48.157214   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:09:48.157228   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:09:48.181984   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:09:48.181991   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:09:48.196295   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:09:48.196307   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:09:48.210891   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:09:48.210900   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:09:48.248037   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:09:48.248049   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:09:48.262208   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:09:48.262221   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:09:48.276354   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:09:48.276365   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:09:48.288260   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:09:48.288270   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:09:48.299114   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:09:48.299126   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:09:48.330213   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:48.330306   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:48.331746   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:09:48.331751   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:09:48.335694   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:48.335701   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:09:48.335724   14042 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 14:09:48.335728   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:09:48.335731   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:09:48.335735   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:09:48.335738   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:09:58.339889   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:10:03.342454   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:10:03.343098   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:10:03.389293   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:10:03.389411   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:10:03.407473   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:10:03.407571   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:10:03.424425   14042 logs.go:276] 2 containers: [f0f456d8e56c a3a4092b2360]
	I0327 14:10:03.424499   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:10:03.436003   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:10:03.436070   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:10:03.446329   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:10:03.446400   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:10:03.457442   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:10:03.457508   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:10:03.467446   14042 logs.go:276] 0 containers: []
	W0327 14:10:03.467459   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:10:03.467515   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:10:03.477925   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:10:03.477940   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:10:03.477947   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:10:03.509371   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:03.509463   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:03.510918   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:10:03.510926   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:10:03.546162   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:10:03.546173   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:10:03.557933   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:10:03.557945   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:10:03.569440   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:10:03.569452   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
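	The container-status command above is a deliberate fallback chain: `which crictl || echo crictl` substitutes the crictl path when one exists and the bare word crictl otherwise, so on a crictl-less host the first ps invocation fails and control falls through to docker. Traced step by step on such a host (illustrative, same command as in the log):

	  $ sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  # which crictl fails -> echo yields "crictl" -> "sudo crictl ps -a" fails -> "sudo docker ps -a" runs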
	I0327 14:10:03.580718   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:10:03.580729   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:10:03.592500   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:10:03.592508   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:10:03.611038   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:10:03.611046   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:10:03.634705   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:10:03.634713   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:10:03.638937   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:10:03.638945   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:10:03.655223   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:10:03.655236   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:10:03.669462   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:10:03.669475   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:10:03.681433   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:10:03.681445   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:10:03.697254   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:03.698565   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:10:03.698594   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:10:03.698599   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:03.698613   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:03.698618   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:03.698622   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
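	The flagged kubelet problem is a node-authorizer denial: a kubelet may only read ConfigMaps referenced by pods currently bound to its node, and immediately after the upgrade no such relationship exists yet for coredns on stopped-upgrade-077000. With an admin kubeconfig the denial can be confirmed via impersonation; a hedged example (requires impersonation rights; prints "no" while the denial persists):

	  $ kubectl auth can-i list configmaps -n kube-system \
	      --as=system:node:stopped-upgrade-077000 --as-group=system:nodes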
	I0327 14:10:13.701490   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:10:18.702950   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:10:18.703313   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:10:18.733455   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:10:18.733570   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:10:18.751965   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:10:18.752059   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:10:18.766453   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:10:18.766535   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:10:18.781854   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:10:18.781927   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:10:18.792298   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:10:18.792358   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:10:18.807137   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:10:18.807204   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:10:18.817439   14042 logs.go:276] 0 containers: []
	W0327 14:10:18.817453   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:10:18.817516   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:10:18.828110   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:10:18.828126   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:10:18.828132   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:10:18.842254   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:10:18.842267   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:10:18.858232   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:10:18.858246   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:10:18.869824   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:10:18.869837   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:10:18.887302   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:10:18.887316   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:10:18.920223   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:18.920316   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:18.921811   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:10:18.921815   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:10:18.933565   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:10:18.933576   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:10:18.945520   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:10:18.945533   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:10:18.970005   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:10:18.970012   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:10:18.981165   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:10:18.981175   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:10:19.017059   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:10:19.017072   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:10:19.028947   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:10:19.028962   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:10:19.043433   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:10:19.043444   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:10:19.054423   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:10:19.054433   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:10:19.058639   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:10:19.058647   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:10:19.070616   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:19.070626   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:10:19.070652   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:10:19.070657   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:19.070661   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:19.070666   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:19.070669   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
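	The name filters in these docker ps calls lean on the k8s_ prefix that dockershim/cri-dockerd applies to Kubernetes-managed containers (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), which is why one substring per component is enough to enumerate it, exited instances included. For example, the query behind the four coredns IDs above:

	  $ docker ps -a --filter=name=k8s_coredns --format={{.ID}}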
	I0327 14:10:29.074744   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:10:34.077110   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:10:34.077520   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:10:34.118601   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:10:34.118734   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:10:34.140808   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:10:34.140919   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:10:34.156313   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:10:34.156396   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:10:34.168831   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:10:34.168907   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:10:34.179718   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:10:34.179784   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:10:34.190050   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:10:34.190117   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:10:34.200249   14042 logs.go:276] 0 containers: []
	W0327 14:10:34.200260   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:10:34.200320   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:10:34.210458   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:10:34.210475   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:10:34.210480   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:10:34.243596   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:34.243686   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:34.245082   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:10:34.245087   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:10:34.256682   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:10:34.256691   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:10:34.268807   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:10:34.268817   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:10:34.286056   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:10:34.286066   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:10:34.302208   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:10:34.302218   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:10:34.318628   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:10:34.318642   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:10:34.330153   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:10:34.330163   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:10:34.334537   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:10:34.334545   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:10:34.369935   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:10:34.369949   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:10:34.381685   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:10:34.381693   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:10:34.393335   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:10:34.393344   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:10:34.405437   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:10:34.405448   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:10:34.417152   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:10:34.417160   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:10:34.432074   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:10:34.432084   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:10:34.455937   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:34.455945   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:10:34.455966   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:10:34.455970   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:34.455973   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:34.455977   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:34.455981   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:10:44.460084   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:10:49.461365   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:10:49.461854   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:10:49.503072   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:10:49.503205   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:10:49.524373   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:10:49.524484   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:10:49.539033   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:10:49.539106   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:10:49.551104   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:10:49.551171   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:10:49.563374   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:10:49.563436   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:10:49.574125   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:10:49.574203   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:10:49.584572   14042 logs.go:276] 0 containers: []
	W0327 14:10:49.584582   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:10:49.584640   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:10:49.595558   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:10:49.595576   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:10:49.595582   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:10:49.599675   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:10:49.599683   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:10:49.611128   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:10:49.611140   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:10:49.635539   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:10:49.635546   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:10:49.655020   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:10:49.655031   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:10:49.668402   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:10:49.668416   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:10:49.681063   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:10:49.681074   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:10:49.699456   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:10:49.699469   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:10:49.711133   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:10:49.711145   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:10:49.744481   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:10:49.744491   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:10:49.756828   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:10:49.756841   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:10:49.788479   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:49.788573   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:49.790014   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:10:49.790021   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:10:49.823945   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:10:49.823958   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:10:49.835671   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:10:49.835683   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:10:49.847288   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:10:49.847299   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:10:49.861585   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:49.861594   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:10:49.861622   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:10:49.861627   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:10:49.861631   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:10:49.861635   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:49.861638   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
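	Each "Gathering logs for kubelet" pass shells out to journalctl and pattern-matches the tail for known failure signatures, which is why the same two reflector errors are re-flagged on every cycle. An equivalent manual scan (the grep pattern is illustrative, not minikube's actual matcher):

	  $ sudo journalctl -u kubelet -n 400 | grep -E 'forbidden|Failed to watch'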
	I0327 14:10:59.863716   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:11:04.865965   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:11:04.866221   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:11:04.891948   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:11:04.892067   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:11:04.909726   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:11:04.909807   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:11:04.923481   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:11:04.923548   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:11:04.938398   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:11:04.938463   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:11:04.953155   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:11:04.953212   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:11:04.963590   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:11:04.963657   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:11:04.973664   14042 logs.go:276] 0 containers: []
	W0327 14:11:04.973674   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:11:04.973730   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:11:04.984563   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:11:04.984579   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:11:04.984585   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:11:04.989374   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:11:04.989383   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:11:05.014215   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:11:05.014228   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:11:05.025687   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:11:05.025700   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:11:05.043190   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:11:05.043202   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:11:05.057115   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:11:05.057126   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:11:05.068221   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:11:05.068229   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:11:05.088025   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:11:05.088040   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:11:05.106628   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:11:05.106639   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:11:05.138501   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:05.138592   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:05.140082   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:11:05.140087   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:11:05.174101   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:11:05.174115   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:11:05.185780   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:11:05.185791   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:11:05.201048   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:11:05.201060   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:11:05.214847   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:11:05.214859   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:11:05.226401   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:11:05.226412   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:11:05.238264   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:05.238276   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:11:05.238302   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:11:05.238308   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:05.238312   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:05.238316   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:05.238318   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:15.241105   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:11:20.243241   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:11:20.243426   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:11:20.260906   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:11:20.260984   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:11:20.271343   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:11:20.271411   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:11:20.282304   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:11:20.282368   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:11:20.292493   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:11:20.292551   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:11:20.302718   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:11:20.302782   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:11:20.314309   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:11:20.314391   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:11:20.325566   14042 logs.go:276] 0 containers: []
	W0327 14:11:20.325577   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:11:20.325654   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:11:20.337363   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:11:20.337382   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:11:20.337387   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:11:20.350714   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:11:20.350726   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:11:20.366922   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:11:20.366934   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:11:20.379133   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:11:20.379145   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:11:20.403270   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:11:20.403285   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:11:20.436735   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:20.436838   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:20.438319   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:11:20.438327   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:11:20.450829   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:11:20.450843   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:11:20.464427   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:11:20.464438   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:11:20.476695   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:11:20.476708   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:11:20.488977   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:11:20.488987   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:11:20.526546   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:11:20.526558   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:11:20.540590   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:11:20.540603   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:11:20.555068   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:11:20.555083   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:11:20.573525   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:11:20.573536   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:11:20.578434   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:11:20.578447   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:11:20.594067   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:20.594079   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:11:20.594108   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:11:20.594112   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:20.594117   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:20.594122   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:20.594125   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:30.598178   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:11:35.599984   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:11:35.600421   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:11:35.650124   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:11:35.650237   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:11:35.668083   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:11:35.668164   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:11:35.681686   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:11:35.681755   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:11:35.693567   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:11:35.693633   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:11:35.704299   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:11:35.704370   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:11:35.718486   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:11:35.718553   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:11:35.729108   14042 logs.go:276] 0 containers: []
	W0327 14:11:35.729121   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:11:35.729179   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:11:35.739562   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:11:35.739580   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:11:35.739585   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:11:35.751529   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:11:35.751540   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:11:35.784075   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:35.784165   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:35.785583   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:11:35.785590   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:11:35.790060   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:11:35.790066   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:11:35.804286   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:11:35.804297   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:11:35.815778   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:11:35.815787   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:11:35.830330   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:11:35.830341   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:11:35.841910   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:11:35.841922   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:11:35.852898   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:11:35.852909   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:11:35.864407   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:11:35.864418   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:11:35.876680   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:11:35.876691   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:11:35.893920   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:11:35.893929   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:11:35.917162   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:11:35.917172   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:11:35.952197   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:11:35.952206   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:11:35.966490   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:11:35.966502   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:11:35.978193   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:35.978204   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:11:35.978228   14042 out.go:239] X Problems detected in kubelet:
	W0327 14:11:35.978233   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:35.978237   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:35.978241   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:35.978245   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
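	Taken together, the healthz probe, container discovery, and log gathering above form one cycle that repeats roughly every 15 seconds until the apiserver responds or the caller times out. The shape of the loop, as an illustrative one-liner rather than minikube's actual api_server.go logic:

	  $ until curl -k --silent --max-time 5 https://10.0.2.15:8443/healthz; do sleep 10; done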
	I0327 14:11:45.982151   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:11:50.984414   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:11:50.984894   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 14:11:51.021113   14042 logs.go:276] 1 containers: [437293aa055e]
	I0327 14:11:51.021249   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 14:11:51.042890   14042 logs.go:276] 1 containers: [d216fbfd3cd1]
	I0327 14:11:51.043002   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 14:11:51.058777   14042 logs.go:276] 4 containers: [5dff5ae36035 042593f6951a f0f456d8e56c a3a4092b2360]
	I0327 14:11:51.058842   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 14:11:51.075123   14042 logs.go:276] 1 containers: [62de682f2860]
	I0327 14:11:51.075182   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 14:11:51.085392   14042 logs.go:276] 1 containers: [a3388c73b872]
	I0327 14:11:51.085462   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 14:11:51.095484   14042 logs.go:276] 1 containers: [ec0ee582a94a]
	I0327 14:11:51.095550   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 14:11:51.105468   14042 logs.go:276] 0 containers: []
	W0327 14:11:51.105477   14042 logs.go:278] No container was found matching "kindnet"
	I0327 14:11:51.105525   14042 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 14:11:51.116085   14042 logs.go:276] 1 containers: [fc31add6a051]
	I0327 14:11:51.116102   14042 logs.go:123] Gathering logs for kube-scheduler [62de682f2860] ...
	I0327 14:11:51.116108   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62de682f2860"
	I0327 14:11:51.130533   14042 logs.go:123] Gathering logs for kube-controller-manager [ec0ee582a94a] ...
	I0327 14:11:51.130544   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0ee582a94a"
	I0327 14:11:51.150991   14042 logs.go:123] Gathering logs for storage-provisioner [fc31add6a051] ...
	I0327 14:11:51.151000   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc31add6a051"
	I0327 14:11:51.163065   14042 logs.go:123] Gathering logs for Docker ...
	I0327 14:11:51.163078   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 14:11:51.186781   14042 logs.go:123] Gathering logs for describe nodes ...
	I0327 14:11:51.186789   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 14:11:51.224654   14042 logs.go:123] Gathering logs for dmesg ...
	I0327 14:11:51.224666   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 14:11:51.229582   14042 logs.go:123] Gathering logs for container status ...
	I0327 14:11:51.229592   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 14:11:51.241183   14042 logs.go:123] Gathering logs for kubelet ...
	I0327 14:11:51.241194   14042 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 14:11:51.273160   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:51.273253   14042 logs.go:138] Found kubelet problem: Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:51.274647   14042 logs.go:123] Gathering logs for coredns [5dff5ae36035] ...
	I0327 14:11:51.274652   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dff5ae36035"
	I0327 14:11:51.293456   14042 logs.go:123] Gathering logs for coredns [f0f456d8e56c] ...
	I0327 14:11:51.293467   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0f456d8e56c"
	I0327 14:11:51.305398   14042 logs.go:123] Gathering logs for kube-proxy [a3388c73b872] ...
	I0327 14:11:51.305409   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3388c73b872"
	I0327 14:11:51.317310   14042 logs.go:123] Gathering logs for etcd [d216fbfd3cd1] ...
	I0327 14:11:51.317321   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d216fbfd3cd1"
	I0327 14:11:51.330864   14042 logs.go:123] Gathering logs for coredns [042593f6951a] ...
	I0327 14:11:51.330877   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 042593f6951a"
	I0327 14:11:51.342668   14042 logs.go:123] Gathering logs for coredns [a3a4092b2360] ...
	I0327 14:11:51.342681   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a4092b2360"
	I0327 14:11:51.354779   14042 logs.go:123] Gathering logs for kube-apiserver [437293aa055e] ...
	I0327 14:11:51.354789   14042 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 437293aa055e"
	I0327 14:11:51.369554   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:51.369565   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 14:11:51.369589   14042 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 14:11:51.369593   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: W0327 21:08:14.697305   10150 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	W0327 14:11:51.369596   14042 out.go:239]   Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	  Mar 27 21:08:14 stopped-upgrade-077000 kubelet[10150]: E0327 21:08:14.697351   10150 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-077000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-077000' and this object
	I0327 14:11:51.369601   14042 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:51.369604   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:01.373624   14042 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 14:12:06.376395   14042 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 14:12:06.381461   14042 out.go:177] 
	W0327 14:12:06.386482   14042 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 14:12:06.386507   14042 out.go:239] * 
	* 
	W0327 14:12:06.388932   14042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:06.398410   14042 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-077000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.99s)
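The failure above is a plain timeout: minikube polled https://10.0.2.15:8443/healthz for the full 6m0s node wait and the apiserver never reported healthy, and the only kubelet problems it surfaced are the coredns ConfigMap denials from the node authorizer. A minimal sketch for checking both by hand, assuming kubectl is on PATH and that the kubeconfig context carries the profile name from the log (both are assumptions, not part of the test output):

	# Ask the apiserver for the same healthz status minikube was polling
	# (10.0.2.15:8443 is only reachable from inside the guest, so go
	# through the kubeconfig rather than curling the IP from the host).
	kubectl --context stopped-upgrade-077000 get --raw='/healthz'

	# Approximate the kubelet's failed ConfigMap list as an access review,
	# impersonating the node identity the kubelet log complains about.
	kubectl --context stopped-upgrade-077000 auth can-i list configmaps \
	  -n kube-system --as=system:node:stopped-upgrade-077000 --as-group=system:nodes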

TestPause/serial/Start (10.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-208000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-208000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.249498292s)

-- stdout --
	* [pause-208000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-208000" primary control-plane node in "pause-208000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-208000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-208000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-208000 -n pause-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-208000 -n pause-208000: exit status 7 (62.052666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.31s)
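This failure, and the start failures that follow it, share one root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connect is refused. A minimal host-side check, assuming socket_vmnet lives at the paths shown in the logs (the Homebrew service name is an assumption; a manually launched daemon would be restarted differently):

	# Is the daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, restarting it usually clears "Connection refused";
	# for a Homebrew-managed install this would be:
	sudo brew services restart socket_vmnet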

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 : exit status 80 (9.813370209s)

-- stdout --
	* [NoKubernetes-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-529000" primary control-plane node in "NoKubernetes-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000: exit status 7 (68.000917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

TestNoKubernetes/serial/StartWithStopK8s (6.37s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 : exit status 80 (6.305267375s)

-- stdout --
	* [NoKubernetes-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-529000
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000: exit status 7 (63.452416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (6.37s)

TestNoKubernetes/serial/Start (5.89s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 : exit status 80 (5.8363565s)

-- stdout --
	* [NoKubernetes-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-529000
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000: exit status 7 (51.2915ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.89s)

TestNoKubernetes/serial/StartNoArgs (6.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 : exit status 80 (6.306523916s)

-- stdout --
	* [NoKubernetes-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-529000
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-529000 -n NoKubernetes-529000: exit status 7 (61.914041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.37s)
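All four NoKubernetes subtests reuse the NoKubernetes-529000 profile, so after the first start fails the later runs only hit "Restarting existing qemu2 VM" against the same refused socket and fail in roughly six seconds each. The recovery step is the one the error output itself suggests, sketched here with the same binary and flags the test ran:

	# Remove the stale profile so the next start re-creates the VM,
	# then retry the same invocation the test used.
	out/minikube-darwin-arm64 delete -p NoKubernetes-529000
	out/minikube-darwin-arm64 start -p NoKubernetes-529000 --driver=qemu2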

TestNetworkPlugins/group/auto/Start (9.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.866821083s)

-- stdout --
	* [auto-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-487000" primary control-plane node in "auto-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:10:46.142952   14379 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:10:46.143084   14379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:10:46.143087   14379 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:46.143090   14379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:10:46.143222   14379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:10:46.144290   14379 out.go:298] Setting JSON to false
	I0327 14:10:46.160720   14379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7816,"bootTime":1711566030,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:10:46.160786   14379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:10:46.166860   14379 out.go:177] * [auto-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:10:46.173816   14379 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:10:46.177804   14379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:10:46.173856   14379 notify.go:220] Checking for updates...
	I0327 14:10:46.180826   14379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:10:46.183789   14379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:10:46.186757   14379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:10:46.189776   14379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:10:46.191526   14379 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:10:46.191585   14379 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:10:46.191639   14379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:10:46.195742   14379 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:10:46.202615   14379 start.go:297] selected driver: qemu2
	I0327 14:10:46.202620   14379 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:10:46.202625   14379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:10:46.204637   14379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:10:46.207771   14379 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:10:46.210875   14379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:10:46.210916   14379 cni.go:84] Creating CNI manager for ""
	I0327 14:10:46.210922   14379 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:10:46.210926   14379 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:10:46.210957   14379 start.go:340] cluster config:
	{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:10:46.214993   14379 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:10:46.221765   14379 out.go:177] * Starting "auto-487000" primary control-plane node in "auto-487000" cluster
	I0327 14:10:46.225817   14379 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:10:46.225829   14379 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:10:46.225838   14379 cache.go:56] Caching tarball of preloaded images
	I0327 14:10:46.225881   14379 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:10:46.225886   14379 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:10:46.225948   14379 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/auto-487000/config.json ...
	I0327 14:10:46.225958   14379 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/auto-487000/config.json: {Name:mk46e06c85d74f67e7e5923c820a931d2daf36e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:10:46.226161   14379 start.go:360] acquireMachinesLock for auto-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:10:46.226188   14379 start.go:364] duration metric: took 22µs to acquireMachinesLock for "auto-487000"
	I0327 14:10:46.226198   14379 start.go:93] Provisioning new machine with config: &{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:10:46.226228   14379 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:10:46.233806   14379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:10:46.248335   14379 start.go:159] libmachine.API.Create for "auto-487000" (driver="qemu2")
	I0327 14:10:46.248371   14379 client.go:168] LocalClient.Create starting
	I0327 14:10:46.248435   14379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:10:46.248465   14379 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:46.248473   14379 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:46.248516   14379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:10:46.248541   14379 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:46.248548   14379 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:46.248889   14379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:10:46.387204   14379 main.go:141] libmachine: Creating SSH key...
	I0327 14:10:46.510119   14379 main.go:141] libmachine: Creating Disk image...
	I0327 14:10:46.510128   14379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:10:46.510305   14379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:46.522584   14379 main.go:141] libmachine: STDOUT: 
	I0327 14:10:46.522604   14379 main.go:141] libmachine: STDERR: 
	I0327 14:10:46.522657   14379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2 +20000M
	I0327 14:10:46.533431   14379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:10:46.533448   14379 main.go:141] libmachine: STDERR: 
	I0327 14:10:46.533467   14379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:46.533474   14379 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:10:46.533504   14379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:22:0b:37:17:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:46.535175   14379 main.go:141] libmachine: STDOUT: 
	I0327 14:10:46.535191   14379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:10:46.535212   14379 client.go:171] duration metric: took 286.8385ms to LocalClient.Create
	I0327 14:10:48.537441   14379 start.go:128] duration metric: took 2.311212792s to createHost
	I0327 14:10:48.537559   14379 start.go:83] releasing machines lock for "auto-487000", held for 2.311394208s
	W0327 14:10:48.537627   14379 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:10:48.553853   14379 out.go:177] * Deleting "auto-487000" in qemu2 ...
	W0327 14:10:48.579615   14379 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:10:48.579651   14379 start.go:728] Will try again in 5 seconds ...
	I0327 14:10:53.581828   14379 start.go:360] acquireMachinesLock for auto-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:10:53.582317   14379 start.go:364] duration metric: took 359.541µs to acquireMachinesLock for "auto-487000"
	I0327 14:10:53.582484   14379 start.go:93] Provisioning new machine with config: &{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:10:53.582716   14379 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:10:53.590294   14379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:10:53.638727   14379 start.go:159] libmachine.API.Create for "auto-487000" (driver="qemu2")
	I0327 14:10:53.638778   14379 client.go:168] LocalClient.Create starting
	I0327 14:10:53.638896   14379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:10:53.638957   14379 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:53.639012   14379 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:53.639085   14379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:10:53.639129   14379 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:53.639140   14379 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:53.639652   14379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:10:53.830741   14379 main.go:141] libmachine: Creating SSH key...
	I0327 14:10:53.911823   14379 main.go:141] libmachine: Creating Disk image...
	I0327 14:10:53.911831   14379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:10:53.912041   14379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:53.924506   14379 main.go:141] libmachine: STDOUT: 
	I0327 14:10:53.924532   14379 main.go:141] libmachine: STDERR: 
	I0327 14:10:53.924597   14379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2 +20000M
	I0327 14:10:53.935997   14379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:10:53.936019   14379 main.go:141] libmachine: STDERR: 
	I0327 14:10:53.936039   14379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:53.936044   14379 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:10:53.936076   14379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:62:cb:82:0d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/auto-487000/disk.qcow2
	I0327 14:10:53.937863   14379 main.go:141] libmachine: STDOUT: 
	I0327 14:10:53.937878   14379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:10:53.937892   14379 client.go:171] duration metric: took 299.108708ms to LocalClient.Create
	I0327 14:10:55.940052   14379 start.go:128] duration metric: took 2.357334666s to createHost
	I0327 14:10:55.940138   14379 start.go:83] releasing machines lock for "auto-487000", held for 2.357828667s
	W0327 14:10:55.940476   14379 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:10:55.954204   14379 out.go:177] 
	W0327 14:10:55.957263   14379 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:10:55.957287   14379 out.go:239] * 
	* 
	W0327 14:10:55.958562   14379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:10:55.970089   14379 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
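The verbose trace above shows why the VM launch produces empty STDOUT: socket_vmnet_client is invoked as "client, socket path, then qemu command" and is expected to connect to /var/run/socket_vmnet before handing the connected socket to qemu-system-aarch64 as a file descriptor (the -netdev socket,id=net0,fd=3 argument), so when the connect is refused, qemu never starts at all. A way to isolate the client from qemu, assuming the client keeps the "socket path, then command" invocation shown in the log (substituting /usr/bin/true for the qemu command is the hypothetical part):

	# If this alone reports "Connection refused", the daemon side is at
	# fault and no qemu flag will help.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true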

TestNetworkPlugins/group/calico/Start (9.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.748097917s)

-- stdout --
	* [calico-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-487000" primary control-plane node in "calico-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:10:58.306622   14493 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:10:58.306771   14493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:10:58.306774   14493 out.go:304] Setting ErrFile to fd 2...
	I0327 14:10:58.306776   14493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:10:58.306887   14493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:10:58.308023   14493 out.go:298] Setting JSON to false
	I0327 14:10:58.324513   14493 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7828,"bootTime":1711566030,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:10:58.324570   14493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:10:58.329708   14493 out.go:177] * [calico-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:10:58.337669   14493 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:10:58.340509   14493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:10:58.337718   14493 notify.go:220] Checking for updates...
	I0327 14:10:58.346598   14493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:10:58.348094   14493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:10:58.351586   14493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:10:58.354679   14493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:10:58.357971   14493 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:10:58.358034   14493 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:10:58.358080   14493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:10:58.362620   14493 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:10:58.369617   14493 start.go:297] selected driver: qemu2
	I0327 14:10:58.369622   14493 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:10:58.369627   14493 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:10:58.371715   14493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:10:58.374525   14493 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:10:58.377705   14493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:10:58.377740   14493 cni.go:84] Creating CNI manager for "calico"
	I0327 14:10:58.377744   14493 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0327 14:10:58.377766   14493 start.go:340] cluster config:
	{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:10:58.382005   14493 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:10:58.388575   14493 out.go:177] * Starting "calico-487000" primary control-plane node in "calico-487000" cluster
	I0327 14:10:58.392597   14493 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:10:58.392609   14493 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:10:58.392617   14493 cache.go:56] Caching tarball of preloaded images
	I0327 14:10:58.392668   14493 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:10:58.392673   14493 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:10:58.392728   14493 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/calico-487000/config.json ...
	I0327 14:10:58.392740   14493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/calico-487000/config.json: {Name:mk195d4b16d634846e0fe99922bbbc9f95c01082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:10:58.392936   14493 start.go:360] acquireMachinesLock for calico-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:10:58.392964   14493 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "calico-487000"
	I0327 14:10:58.392976   14493 start.go:93] Provisioning new machine with config: &{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:10:58.393003   14493 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:10:58.400598   14493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:10:58.416153   14493 start.go:159] libmachine.API.Create for "calico-487000" (driver="qemu2")
	I0327 14:10:58.416178   14493 client.go:168] LocalClient.Create starting
	I0327 14:10:58.416242   14493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:10:58.416274   14493 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:58.416299   14493 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:58.416344   14493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:10:58.416365   14493 main.go:141] libmachine: Decoding PEM data...
	I0327 14:10:58.416371   14493 main.go:141] libmachine: Parsing certificate...
	I0327 14:10:58.416700   14493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:10:58.554291   14493 main.go:141] libmachine: Creating SSH key...
	I0327 14:10:58.600852   14493 main.go:141] libmachine: Creating Disk image...
	I0327 14:10:58.600857   14493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:10:58.601009   14493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:10:58.613129   14493 main.go:141] libmachine: STDOUT: 
	I0327 14:10:58.613157   14493 main.go:141] libmachine: STDERR: 
	I0327 14:10:58.613220   14493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2 +20000M
	I0327 14:10:58.624163   14493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:10:58.624179   14493 main.go:141] libmachine: STDERR: 
	I0327 14:10:58.624195   14493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:10:58.624199   14493 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:10:58.624228   14493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:0b:11:59:31:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:10:58.625884   14493 main.go:141] libmachine: STDOUT: 
	I0327 14:10:58.625898   14493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:10:58.625917   14493 client.go:171] duration metric: took 209.73725ms to LocalClient.Create
	I0327 14:11:00.627688   14493 start.go:128] duration metric: took 2.2346855s to createHost
	I0327 14:11:00.627900   14493 start.go:83] releasing machines lock for "calico-487000", held for 2.234908042s
	W0327 14:11:00.627974   14493 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:00.640158   14493 out.go:177] * Deleting "calico-487000" in qemu2 ...
	W0327 14:11:00.668425   14493 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:00.668460   14493 start.go:728] Will try again in 5 seconds ...
	I0327 14:11:05.670549   14493 start.go:360] acquireMachinesLock for calico-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:05.670707   14493 start.go:364] duration metric: took 119.708µs to acquireMachinesLock for "calico-487000"
	I0327 14:11:05.670744   14493 start.go:93] Provisioning new machine with config: &{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:05.670798   14493 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:05.676924   14493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:05.693171   14493 start.go:159] libmachine.API.Create for "calico-487000" (driver="qemu2")
	I0327 14:11:05.693199   14493 client.go:168] LocalClient.Create starting
	I0327 14:11:05.693258   14493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:05.693302   14493 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:05.693311   14493 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:05.693345   14493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:05.693366   14493 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:05.693370   14493 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:05.693644   14493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:05.833953   14493 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:05.953440   14493 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:05.953447   14493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:05.953620   14493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:11:05.966141   14493 main.go:141] libmachine: STDOUT: 
	I0327 14:11:05.966161   14493 main.go:141] libmachine: STDERR: 
	I0327 14:11:05.966214   14493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2 +20000M
	I0327 14:11:05.976924   14493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:05.976945   14493 main.go:141] libmachine: STDERR: 
	I0327 14:11:05.976960   14493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:11:05.976965   14493 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:05.976998   14493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:91:b3:f5:66:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/calico-487000/disk.qcow2
	I0327 14:11:05.978857   14493 main.go:141] libmachine: STDOUT: 
	I0327 14:11:05.978872   14493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:05.978885   14493 client.go:171] duration metric: took 285.686875ms to LocalClient.Create
	I0327 14:11:07.981093   14493 start.go:128] duration metric: took 2.310293042s to createHost
	I0327 14:11:07.981226   14493 start.go:83] releasing machines lock for "calico-487000", held for 2.310527417s
	W0327 14:11:07.981633   14493 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:07.997303   14493 out.go:177] 
	W0327 14:11:08.001448   14493 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:11:08.001477   14493 out.go:239] * 
	* 
	W0327 14:11:08.004149   14493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:11:08.012348   14493 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.75s)
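
This failure never reaches Kubernetes: socket_vmnet_client cannot open the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube gives up after one retry. Before re-running the suite it is worth confirming on the host that the socket_vmnet daemon is actually up. The commands below are a diagnostic sketch only; they assume the /opt/socket_vmnet install layout shown in the logs, and the gateway address is an illustrative value, not one taken from this job:

	# Is the daemon running, and does its socket exist?
	$ pgrep -fl socket_vmnet
	$ ls -l /var/run/socket_vmnet

	# If not, start it in the foreground to watch for errors
	# (example gateway; adjust to the host's vmnet configuration):
	$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet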

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.790421084s)

-- stdout --
	* [custom-flannel-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-487000" primary control-plane node in "custom-flannel-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:11:10.548020   14611 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:11:10.548160   14611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:10.548163   14611 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:10.548165   14611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:10.548319   14611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:11:10.549426   14611 out.go:298] Setting JSON to false
	I0327 14:11:10.565831   14611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7840,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:11:10.565900   14611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:11:10.572259   14611 out.go:177] * [custom-flannel-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:11:10.578127   14611 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:11:10.581267   14611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:11:10.578148   14611 notify.go:220] Checking for updates...
	I0327 14:11:10.587180   14611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:11:10.590251   14611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:11:10.593299   14611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:11:10.594762   14611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:11:10.598619   14611 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:11:10.598679   14611 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:11:10.598717   14611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:11:10.603290   14611 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:11:10.609211   14611 start.go:297] selected driver: qemu2
	I0327 14:11:10.609216   14611 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:11:10.609221   14611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:11:10.611422   14611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:11:10.614268   14611 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:11:10.618141   14611 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:11:10.618195   14611 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0327 14:11:10.618208   14611 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0327 14:11:10.618238   14611 start.go:340] cluster config:
	{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:11:10.622646   14611 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:11:10.630307   14611 out.go:177] * Starting "custom-flannel-487000" primary control-plane node in "custom-flannel-487000" cluster
	I0327 14:11:10.634197   14611 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:11:10.634211   14611 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:11:10.634222   14611 cache.go:56] Caching tarball of preloaded images
	I0327 14:11:10.634274   14611 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:11:10.634279   14611 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:11:10.634342   14611 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/custom-flannel-487000/config.json ...
	I0327 14:11:10.634352   14611 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/custom-flannel-487000/config.json: {Name:mk7d670b78eda2b92a340ad09e7439fda9065067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:11:10.634559   14611 start.go:360] acquireMachinesLock for custom-flannel-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:10.634590   14611 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "custom-flannel-487000"
	I0327 14:11:10.634602   14611 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:10.634657   14611 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:10.643200   14611 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:10.657997   14611 start.go:159] libmachine.API.Create for "custom-flannel-487000" (driver="qemu2")
	I0327 14:11:10.658019   14611 client.go:168] LocalClient.Create starting
	I0327 14:11:10.658078   14611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:10.658106   14611 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:10.658115   14611 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:10.658154   14611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:10.658175   14611 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:10.658182   14611 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:10.658502   14611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:10.796552   14611 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:10.908435   14611 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:10.908442   14611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:10.908628   14611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:10.920628   14611 main.go:141] libmachine: STDOUT: 
	I0327 14:11:10.920647   14611 main.go:141] libmachine: STDERR: 
	I0327 14:11:10.920702   14611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2 +20000M
	I0327 14:11:10.931468   14611 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:10.931485   14611 main.go:141] libmachine: STDERR: 
	I0327 14:11:10.931503   14611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:10.931508   14611 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:10.931546   14611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:36:76:c6:f0:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:10.933250   14611 main.go:141] libmachine: STDOUT: 
	I0327 14:11:10.933266   14611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:10.933284   14611 client.go:171] duration metric: took 275.264375ms to LocalClient.Create
	I0327 14:11:12.935509   14611 start.go:128] duration metric: took 2.300852625s to createHost
	I0327 14:11:12.935678   14611 start.go:83] releasing machines lock for "custom-flannel-487000", held for 2.301110167s
	W0327 14:11:12.935805   14611 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:12.951010   14611 out.go:177] * Deleting "custom-flannel-487000" in qemu2 ...
	W0327 14:11:12.974684   14611 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:12.974769   14611 start.go:728] Will try again in 5 seconds ...
	I0327 14:11:17.976820   14611 start.go:360] acquireMachinesLock for custom-flannel-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:17.977435   14611 start.go:364] duration metric: took 492.084µs to acquireMachinesLock for "custom-flannel-487000"
	I0327 14:11:17.977524   14611 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:17.977822   14611 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:17.988472   14611 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:18.033700   14611 start.go:159] libmachine.API.Create for "custom-flannel-487000" (driver="qemu2")
	I0327 14:11:18.033748   14611 client.go:168] LocalClient.Create starting
	I0327 14:11:18.033855   14611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:18.033919   14611 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:18.033936   14611 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:18.034006   14611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:18.034048   14611 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:18.034064   14611 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:18.034748   14611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:18.183304   14611 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:18.250588   14611 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:18.250594   14611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:18.250760   14611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:18.263364   14611 main.go:141] libmachine: STDOUT: 
	I0327 14:11:18.263385   14611 main.go:141] libmachine: STDERR: 
	I0327 14:11:18.263440   14611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2 +20000M
	I0327 14:11:18.274830   14611 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:18.274856   14611 main.go:141] libmachine: STDERR: 
	I0327 14:11:18.274869   14611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:18.274873   14611 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:18.274909   14611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:bc:91:d5:31:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0327 14:11:18.276799   14611 main.go:141] libmachine: STDOUT: 
	I0327 14:11:18.276816   14611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:18.276831   14611 client.go:171] duration metric: took 243.081334ms to LocalClient.Create
	I0327 14:11:20.278884   14611 start.go:128] duration metric: took 2.301082125s to createHost
	I0327 14:11:20.278898   14611 start.go:83] releasing machines lock for "custom-flannel-487000", held for 2.301460375s
	W0327 14:11:20.278964   14611 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:20.283167   14611 out.go:177] 
	W0327 14:11:20.288076   14611 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:11:20.288084   14611 out.go:239] * 
	* 
	W0327 14:11:20.288597   14611 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:11:20.299062   14611 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
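
The same "Connection refused" can be reproduced without minikube, which helps separate a broken daemon from a broken test binary. socket_vmnet_client connects to the socket, passes the vmnet file descriptor to its child (fd 3 in the qemu command lines above), and execs whatever command follows, so handing it a trivial command makes a quick health probe. A minimal check, assuming the client path recorded in these logs:

	# Exits 0 silently when the daemon is reachable; otherwise it prints the
	# same error seen in every failed start in this report:
	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	Failed to connect to "/var/run/socket_vmnet": Connection refused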

TestNetworkPlugins/group/false/Start (9.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.751931667s)

-- stdout --
	* [false-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-487000" primary control-plane node in "false-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:11:22.755256   14729 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:11:22.755520   14729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:22.755526   14729 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:22.755529   14729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:22.755664   14729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:11:22.756983   14729 out.go:298] Setting JSON to false
	I0327 14:11:22.773643   14729 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7852,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:11:22.773712   14729 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:11:22.778729   14729 out.go:177] * [false-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:11:22.786759   14729 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:11:22.786802   14729 notify.go:220] Checking for updates...
	I0327 14:11:22.793755   14729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:11:22.796802   14729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:11:22.799785   14729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:11:22.802812   14729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:11:22.805793   14729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:11:22.809128   14729 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:11:22.809191   14729 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:11:22.809239   14729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:11:22.813787   14729 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:11:22.820732   14729 start.go:297] selected driver: qemu2
	I0327 14:11:22.820738   14729 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:11:22.820745   14729 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:11:22.823108   14729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:11:22.826793   14729 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:11:22.829743   14729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:11:22.829776   14729 cni.go:84] Creating CNI manager for "false"
	I0327 14:11:22.829810   14729 start.go:340] cluster config:
	{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:11:22.834506   14729 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:11:22.842724   14729 out.go:177] * Starting "false-487000" primary control-plane node in "false-487000" cluster
	I0327 14:11:22.846694   14729 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:11:22.846709   14729 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:11:22.846717   14729 cache.go:56] Caching tarball of preloaded images
	I0327 14:11:22.846774   14729 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:11:22.846780   14729 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:11:22.846835   14729 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/false-487000/config.json ...
	I0327 14:11:22.846846   14729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/false-487000/config.json: {Name:mk0889743d530aa1aeff3493027842a0bb880f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:11:22.847050   14729 start.go:360] acquireMachinesLock for false-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:22.847082   14729 start.go:364] duration metric: took 25.416µs to acquireMachinesLock for "false-487000"
	I0327 14:11:22.847093   14729 start.go:93] Provisioning new machine with config: &{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:22.847119   14729 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:22.854680   14729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:22.868961   14729 start.go:159] libmachine.API.Create for "false-487000" (driver="qemu2")
	I0327 14:11:22.868989   14729 client.go:168] LocalClient.Create starting
	I0327 14:11:22.869047   14729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:22.869075   14729 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:22.869087   14729 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:22.869131   14729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:22.869159   14729 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:22.869166   14729 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:22.869501   14729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:23.011962   14729 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:23.097455   14729 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:23.097462   14729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:23.097620   14729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:23.109715   14729 main.go:141] libmachine: STDOUT: 
	I0327 14:11:23.109739   14729 main.go:141] libmachine: STDERR: 
	I0327 14:11:23.109796   14729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2 +20000M
	I0327 14:11:23.120503   14729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:23.120522   14729 main.go:141] libmachine: STDERR: 
	I0327 14:11:23.120538   14729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:23.120543   14729 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:23.120581   14729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:8b:fb:70:61:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:23.122276   14729 main.go:141] libmachine: STDOUT: 
	I0327 14:11:23.122295   14729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:23.122317   14729 client.go:171] duration metric: took 253.326416ms to LocalClient.Create
	I0327 14:11:25.124605   14729 start.go:128] duration metric: took 2.277487667s to createHost
	I0327 14:11:25.124736   14729 start.go:83] releasing machines lock for "false-487000", held for 2.2776765s
	W0327 14:11:25.124835   14729 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:25.132140   14729 out.go:177] * Deleting "false-487000" in qemu2 ...
	W0327 14:11:25.164450   14729 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:25.164494   14729 start.go:728] Will try again in 5 seconds ...
	I0327 14:11:30.166585   14729 start.go:360] acquireMachinesLock for false-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:30.166770   14729 start.go:364] duration metric: took 151.459µs to acquireMachinesLock for "false-487000"
	I0327 14:11:30.166818   14729 start.go:93] Provisioning new machine with config: &{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:30.166901   14729 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:30.177156   14729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:30.196999   14729 start.go:159] libmachine.API.Create for "false-487000" (driver="qemu2")
	I0327 14:11:30.197022   14729 client.go:168] LocalClient.Create starting
	I0327 14:11:30.197089   14729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:30.197127   14729 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:30.197138   14729 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:30.197180   14729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:30.197204   14729 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:30.197213   14729 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:30.197581   14729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:30.336153   14729 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:30.404116   14729 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:30.404123   14729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:30.404300   14729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:30.416737   14729 main.go:141] libmachine: STDOUT: 
	I0327 14:11:30.416759   14729 main.go:141] libmachine: STDERR: 
	I0327 14:11:30.416819   14729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2 +20000M
	I0327 14:11:30.427489   14729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:30.427506   14729 main.go:141] libmachine: STDERR: 
	I0327 14:11:30.427528   14729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:30.427532   14729 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:30.427563   14729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:05:0f:c8:27:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/false-487000/disk.qcow2
	I0327 14:11:30.429292   14729 main.go:141] libmachine: STDOUT: 
	I0327 14:11:30.429309   14729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:30.429321   14729 client.go:171] duration metric: took 232.29875ms to LocalClient.Create
	I0327 14:11:32.431511   14729 start.go:128] duration metric: took 2.26460375s to createHost
	I0327 14:11:32.431637   14729 start.go:83] releasing machines lock for "false-487000", held for 2.26487s
	W0327 14:11:32.432047   14729 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:32.446752   14729 out.go:177] 
	W0327 14:11:32.450894   14729 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:11:32.450929   14729 out.go:239] * 
	* 
	W0327 14:11:32.453315   14729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:11:32.463858   14729 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.75s)
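Note: both start attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched and minikube exits with status 80 (the GUEST_PROVISION error in the stderr log). A minimal triage sketch for the CI host follows; it assumes socket_vmnet was installed as a Homebrew-managed service (the service name is an assumption, not taken from this log):

	# Does the daemon's Unix socket exist on the host?
	ls -l /var/run/socket_vmnet
	# Is the daemon running? (assumes a Homebrew-managed socket_vmnet service)
	sudo brew services list | grep socket_vmnet
	# Restart it if it is stopped or wedged, then re-run the failing test
	sudo brew services restart socket_vmnet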

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.851290458s)

-- stdout --
	* [kindnet-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-487000" primary control-plane node in "kindnet-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:11:34.760513   14841 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:11:34.760640   14841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:34.760643   14841 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:34.760645   14841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:34.760769   14841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:11:34.761840   14841 out.go:298] Setting JSON to false
	I0327 14:11:34.778264   14841 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7864,"bootTime":1711566030,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:11:34.778337   14841 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:11:34.784286   14841 out.go:177] * [kindnet-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:11:34.792217   14841 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:11:34.796287   14841 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:11:34.792252   14841 notify.go:220] Checking for updates...
	I0327 14:11:34.801203   14841 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:11:34.804219   14841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:11:34.807263   14841 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:11:34.810161   14841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:11:34.813535   14841 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:11:34.813604   14841 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:11:34.813648   14841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:11:34.818188   14841 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:11:34.825199   14841 start.go:297] selected driver: qemu2
	I0327 14:11:34.825205   14841 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:11:34.825212   14841 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:11:34.827522   14841 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:11:34.830177   14841 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:11:34.833183   14841 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:11:34.833222   14841 cni.go:84] Creating CNI manager for "kindnet"
	I0327 14:11:34.833226   14841 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 14:11:34.833258   14841 start.go:340] cluster config:
	{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:11:34.837703   14841 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:11:34.845017   14841 out.go:177] * Starting "kindnet-487000" primary control-plane node in "kindnet-487000" cluster
	I0327 14:11:34.849188   14841 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:11:34.849201   14841 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:11:34.849211   14841 cache.go:56] Caching tarball of preloaded images
	I0327 14:11:34.849265   14841 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:11:34.849271   14841 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:11:34.849340   14841 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kindnet-487000/config.json ...
	I0327 14:11:34.849350   14841 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kindnet-487000/config.json: {Name:mk893120dc72345f832dee8ded036b5ff6ecbb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:11:34.849551   14841 start.go:360] acquireMachinesLock for kindnet-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:34.849579   14841 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "kindnet-487000"
	I0327 14:11:34.849591   14841 start.go:93] Provisioning new machine with config: &{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:34.849615   14841 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:34.855168   14841 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:34.870413   14841 start.go:159] libmachine.API.Create for "kindnet-487000" (driver="qemu2")
	I0327 14:11:34.870433   14841 client.go:168] LocalClient.Create starting
	I0327 14:11:34.870482   14841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:34.870511   14841 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:34.870519   14841 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:34.870558   14841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:34.870579   14841 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:34.870586   14841 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:34.870898   14841 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:35.010697   14841 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:35.131661   14841 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:35.131675   14841 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:35.131851   14841 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:35.144417   14841 main.go:141] libmachine: STDOUT: 
	I0327 14:11:35.144437   14841 main.go:141] libmachine: STDERR: 
	I0327 14:11:35.144496   14841 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2 +20000M
	I0327 14:11:35.155356   14841 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:35.155385   14841 main.go:141] libmachine: STDERR: 
	I0327 14:11:35.155400   14841 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:35.155405   14841 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:35.155434   14841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e9:6c:cf:f6:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:35.157176   14841 main.go:141] libmachine: STDOUT: 
	I0327 14:11:35.157189   14841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:35.157208   14841 client.go:171] duration metric: took 286.774333ms to LocalClient.Create
	I0327 14:11:37.157813   14841 start.go:128] duration metric: took 2.30820475s to createHost
	I0327 14:11:37.157913   14841 start.go:83] releasing machines lock for "kindnet-487000", held for 2.308356917s
	W0327 14:11:37.157996   14841 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:37.169214   14841 out.go:177] * Deleting "kindnet-487000" in qemu2 ...
	W0327 14:11:37.194114   14841 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:37.194148   14841 start.go:728] Will try again in 5 seconds ...
	I0327 14:11:42.196269   14841 start.go:360] acquireMachinesLock for kindnet-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:42.196744   14841 start.go:364] duration metric: took 347.5µs to acquireMachinesLock for "kindnet-487000"
	I0327 14:11:42.196911   14841 start.go:93] Provisioning new machine with config: &{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:42.197100   14841 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:42.201588   14841 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:42.238876   14841 start.go:159] libmachine.API.Create for "kindnet-487000" (driver="qemu2")
	I0327 14:11:42.238929   14841 client.go:168] LocalClient.Create starting
	I0327 14:11:42.239023   14841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:42.239078   14841 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:42.239092   14841 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:42.239154   14841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:42.239190   14841 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:42.239199   14841 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:42.239654   14841 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:42.385751   14841 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:42.509579   14841 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:42.509588   14841 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:42.509768   14841 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:42.522417   14841 main.go:141] libmachine: STDOUT: 
	I0327 14:11:42.522437   14841 main.go:141] libmachine: STDERR: 
	I0327 14:11:42.522493   14841 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2 +20000M
	I0327 14:11:42.533434   14841 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:42.533453   14841 main.go:141] libmachine: STDERR: 
	I0327 14:11:42.533464   14841 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:42.533470   14841 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:42.533507   14841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6b:78:c0:20:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kindnet-487000/disk.qcow2
	I0327 14:11:42.535314   14841 main.go:141] libmachine: STDOUT: 
	I0327 14:11:42.535329   14841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:42.535342   14841 client.go:171] duration metric: took 296.410541ms to LocalClient.Create
	I0327 14:11:44.537505   14841 start.go:128] duration metric: took 2.340405792s to createHost
	I0327 14:11:44.537579   14841 start.go:83] releasing machines lock for "kindnet-487000", held for 2.3408335s
	W0327 14:11:44.538014   14841 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:44.547732   14841 out.go:177] 
	W0327 14:11:44.553640   14841 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:11:44.553708   14841 out.go:239] * 
	* 
	W0327 14:11:44.556703   14841 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:11:44.569735   14841 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
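Note: kindnet fails identically to the "false" plugin above, which points at the host environment rather than the CNI under test. The connect step can be exercised without minikube at all; the sketch below is hypothetical and assumes socket_vmnet_client dials the socket before exec'ing the trailing command (here the no-op true), so its exit status isolates the socket connection from QEMU:

	# Attempt the same socket connection the test makes, minus QEMU
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "socket_vmnet_client exit status: $?"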

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.788797833s)

-- stdout --
	* [flannel-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-487000" primary control-plane node in "flannel-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:11:46.983574   14961 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:11:46.983700   14961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:46.983702   14961 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:46.983705   14961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:46.983831   14961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:11:46.984871   14961 out.go:298] Setting JSON to false
	I0327 14:11:47.001230   14961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7876,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:11:47.001294   14961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:11:47.005469   14961 out.go:177] * [flannel-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:11:47.013591   14961 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:11:47.017423   14961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:11:47.013649   14961 notify.go:220] Checking for updates...
	I0327 14:11:47.023538   14961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:11:47.026439   14961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:11:47.029473   14961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:11:47.032514   14961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:11:47.035731   14961 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:11:47.035795   14961 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:11:47.035839   14961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:11:47.040442   14961 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:11:47.046492   14961 start.go:297] selected driver: qemu2
	I0327 14:11:47.046498   14961 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:11:47.046504   14961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:11:47.048714   14961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:11:47.051458   14961 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:11:47.054586   14961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:11:47.054631   14961 cni.go:84] Creating CNI manager for "flannel"
	I0327 14:11:47.054635   14961 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0327 14:11:47.054671   14961 start.go:340] cluster config:
	{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:11:47.059201   14961 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:11:47.066475   14961 out.go:177] * Starting "flannel-487000" primary control-plane node in "flannel-487000" cluster
	I0327 14:11:47.070513   14961 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:11:47.070534   14961 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:11:47.070550   14961 cache.go:56] Caching tarball of preloaded images
	I0327 14:11:47.070630   14961 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:11:47.070642   14961 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:11:47.070710   14961 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/flannel-487000/config.json ...
	I0327 14:11:47.070727   14961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/flannel-487000/config.json: {Name:mkd91983984048e7c8634b3dace67b9c7cf42480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:11:47.070932   14961 start.go:360] acquireMachinesLock for flannel-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:47.070964   14961 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "flannel-487000"
	I0327 14:11:47.070975   14961 start.go:93] Provisioning new machine with config: &{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:47.071009   14961 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:47.079496   14961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:47.094476   14961 start.go:159] libmachine.API.Create for "flannel-487000" (driver="qemu2")
	I0327 14:11:47.094503   14961 client.go:168] LocalClient.Create starting
	I0327 14:11:47.094560   14961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:47.094591   14961 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:47.094599   14961 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:47.094647   14961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:47.094668   14961 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:47.094677   14961 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:47.095039   14961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:47.233832   14961 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:47.342204   14961 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:47.342214   14961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:47.342396   14961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:47.354944   14961 main.go:141] libmachine: STDOUT: 
	I0327 14:11:47.354965   14961 main.go:141] libmachine: STDERR: 
	I0327 14:11:47.355031   14961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2 +20000M
	I0327 14:11:47.366386   14961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:47.366408   14961 main.go:141] libmachine: STDERR: 
	I0327 14:11:47.366421   14961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:47.366427   14961 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:47.366453   14961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:2c:5b:e6:50:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:47.368295   14961 main.go:141] libmachine: STDOUT: 
	I0327 14:11:47.368331   14961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:47.368353   14961 client.go:171] duration metric: took 273.84925ms to LocalClient.Create
	I0327 14:11:49.370581   14961 start.go:128] duration metric: took 2.299545333s to createHost
	I0327 14:11:49.370677   14961 start.go:83] releasing machines lock for "flannel-487000", held for 2.299737416s
	W0327 14:11:49.370726   14961 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:49.382727   14961 out.go:177] * Deleting "flannel-487000" in qemu2 ...
	W0327 14:11:49.402722   14961 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:49.402752   14961 start.go:728] Will try again in 5 seconds ...
	I0327 14:11:54.404873   14961 start.go:360] acquireMachinesLock for flannel-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:54.405458   14961 start.go:364] duration metric: took 478.417µs to acquireMachinesLock for "flannel-487000"
	I0327 14:11:54.405661   14961 start.go:93] Provisioning new machine with config: &{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:54.405944   14961 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:54.411557   14961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:54.462194   14961 start.go:159] libmachine.API.Create for "flannel-487000" (driver="qemu2")
	I0327 14:11:54.462249   14961 client.go:168] LocalClient.Create starting
	I0327 14:11:54.462360   14961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:54.462443   14961 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:54.462457   14961 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:54.462521   14961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:54.462568   14961 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:54.462582   14961 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:54.463232   14961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:54.610839   14961 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:54.676823   14961 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:54.676827   14961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:54.676981   14961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:54.689514   14961 main.go:141] libmachine: STDOUT: 
	I0327 14:11:54.689543   14961 main.go:141] libmachine: STDERR: 
	I0327 14:11:54.689596   14961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2 +20000M
	I0327 14:11:54.701134   14961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:54.701151   14961 main.go:141] libmachine: STDERR: 
	I0327 14:11:54.701162   14961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:54.701175   14961 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:54.701211   14961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:93:bf:fb:ed:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/flannel-487000/disk.qcow2
	I0327 14:11:54.703089   14961 main.go:141] libmachine: STDOUT: 
	I0327 14:11:54.703107   14961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:54.703119   14961 client.go:171] duration metric: took 240.865917ms to LocalClient.Create
	I0327 14:11:56.705302   14961 start.go:128] duration metric: took 2.299309625s to createHost
	I0327 14:11:56.705352   14961 start.go:83] releasing machines lock for "flannel-487000", held for 2.299852958s
	W0327 14:11:56.705558   14961 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:11:56.719955   14961 out.go:177] 
	W0327 14:11:56.722978   14961 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:11:56.723002   14961 out.go:239] * 
	* 
	W0327 14:11:56.724364   14961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:11:56.731867   14961 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
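
Every failure in this group has the same root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is refusing connections on this agent. A minimal triage sketch on the affected host, assuming the Homebrew-packaged socket_vmnet service that minikube's qemu2 driver documentation describes (illustrative commands, not part of the recorded run):

	# Does the daemon socket exist on the agent?
	ls -l /var/run/socket_vmnet
	# Is the service loaded? (the service name assumes the Homebrew package)
	sudo launchctl list | grep -i socket_vmnet
	# (Re)start the daemon, then re-run the failing test
	sudo brew services restart socket_vmnet

Once the daemon accepts connections again, the socket_vmnet_client invocation recorded above should start QEMU instead of printing "Connection refused".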

TestNetworkPlugins/group/enable-default-cni/Start (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.190338291s)

-- stdout --
	* [enable-default-cni-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-487000" primary control-plane node in "enable-default-cni-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:11:59.221427   15082 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:11:59.221542   15082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:59.221549   15082 out.go:304] Setting ErrFile to fd 2...
	I0327 14:11:59.221551   15082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:11:59.221681   15082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:11:59.222850   15082 out.go:298] Setting JSON to false
	I0327 14:11:59.239440   15082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7889,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:11:59.239508   15082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:11:59.245133   15082 out.go:177] * [enable-default-cni-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:11:59.252020   15082 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:11:59.256100   15082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:11:59.252078   15082 notify.go:220] Checking for updates...
	I0327 14:11:59.260410   15082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:11:59.263140   15082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:11:59.266127   15082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:11:59.269180   15082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:11:59.272422   15082 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:11:59.272495   15082 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:11:59.272542   15082 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:11:59.277144   15082 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:11:59.284122   15082 start.go:297] selected driver: qemu2
	I0327 14:11:59.284128   15082 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:11:59.284136   15082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:11:59.286439   15082 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:11:59.289088   15082 out.go:177] * Automatically selected the socket_vmnet network
	E0327 14:11:59.292188   15082 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0327 14:11:59.292202   15082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:11:59.292232   15082 cni.go:84] Creating CNI manager for "bridge"
	I0327 14:11:59.292236   15082 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:11:59.292278   15082 start.go:340] cluster config:
	{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:11:59.296814   15082 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:11:59.304126   15082 out.go:177] * Starting "enable-default-cni-487000" primary control-plane node in "enable-default-cni-487000" cluster
	I0327 14:11:59.307013   15082 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:11:59.307044   15082 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:11:59.307052   15082 cache.go:56] Caching tarball of preloaded images
	I0327 14:11:59.307102   15082 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:11:59.307107   15082 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:11:59.307181   15082 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/enable-default-cni-487000/config.json ...
	I0327 14:11:59.307194   15082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/enable-default-cni-487000/config.json: {Name:mk128e46c397313eefbcf5c2e91d5b930630e56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:11:59.307410   15082 start.go:360] acquireMachinesLock for enable-default-cni-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:11:59.307441   15082 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "enable-default-cni-487000"
	I0327 14:11:59.307453   15082 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:11:59.307487   15082 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:11:59.315964   15082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:11:59.333531   15082 start.go:159] libmachine.API.Create for "enable-default-cni-487000" (driver="qemu2")
	I0327 14:11:59.333558   15082 client.go:168] LocalClient.Create starting
	I0327 14:11:59.333624   15082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:11:59.333656   15082 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:59.333676   15082 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:59.333719   15082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:11:59.333748   15082 main.go:141] libmachine: Decoding PEM data...
	I0327 14:11:59.333755   15082 main.go:141] libmachine: Parsing certificate...
	I0327 14:11:59.334128   15082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:11:59.474104   15082 main.go:141] libmachine: Creating SSH key...
	I0327 14:11:59.645546   15082 main.go:141] libmachine: Creating Disk image...
	I0327 14:11:59.645557   15082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:11:59.645764   15082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:11:59.658873   15082 main.go:141] libmachine: STDOUT: 
	I0327 14:11:59.658899   15082 main.go:141] libmachine: STDERR: 
	I0327 14:11:59.658964   15082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2 +20000M
	I0327 14:11:59.670112   15082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:11:59.670131   15082 main.go:141] libmachine: STDERR: 
	I0327 14:11:59.670154   15082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:11:59.670160   15082 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:11:59.670189   15082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:ee:c5:0c:d3:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:11:59.672017   15082 main.go:141] libmachine: STDOUT: 
	I0327 14:11:59.672033   15082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:11:59.672055   15082 client.go:171] duration metric: took 338.492375ms to LocalClient.Create
	I0327 14:12:01.674205   15082 start.go:128] duration metric: took 2.366733083s to createHost
	I0327 14:12:01.674252   15082 start.go:83] releasing machines lock for "enable-default-cni-487000", held for 2.366838958s
	W0327 14:12:01.674290   15082 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:01.688161   15082 out.go:177] * Deleting "enable-default-cni-487000" in qemu2 ...
	W0327 14:12:01.703771   15082 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:01.703783   15082 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:06.704681   15082 start.go:360] acquireMachinesLock for enable-default-cni-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:06.704789   15082 start.go:364] duration metric: took 74.708µs to acquireMachinesLock for "enable-default-cni-487000"
	I0327 14:12:06.704804   15082 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:06.704855   15082 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:06.713236   15082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:12:06.728337   15082 start.go:159] libmachine.API.Create for "enable-default-cni-487000" (driver="qemu2")
	I0327 14:12:06.728366   15082 client.go:168] LocalClient.Create starting
	I0327 14:12:06.728422   15082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:06.728452   15082 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:06.728462   15082 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:06.728495   15082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:06.728515   15082 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:06.728525   15082 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:06.729423   15082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:07.140607   15082 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:07.306060   15082 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:07.306068   15082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:07.306248   15082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:12:07.328599   15082 main.go:141] libmachine: STDOUT: 
	I0327 14:12:07.328619   15082 main.go:141] libmachine: STDERR: 
	I0327 14:12:07.328674   15082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2 +20000M
	I0327 14:12:07.340078   15082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:07.340101   15082 main.go:141] libmachine: STDERR: 
	I0327 14:12:07.340116   15082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:12:07.340120   15082 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:07.340165   15082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:30:72:98:68:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0327 14:12:07.342264   15082 main.go:141] libmachine: STDOUT: 
	I0327 14:12:07.342283   15082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:07.342298   15082 client.go:171] duration metric: took 613.936792ms to LocalClient.Create
	I0327 14:12:09.343773   15082 start.go:128] duration metric: took 2.63891775s to createHost
	I0327 14:12:09.343840   15082 start.go:83] releasing machines lock for "enable-default-cni-487000", held for 2.639077083s
	W0327 14:12:09.344153   15082 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:09.353792   15082 out.go:177] 
	W0327 14:12:09.357904   15082 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:09.357921   15082 out.go:239] * 
	* 
	W0327 14:12:09.359599   15082 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:09.367512   15082 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.19s)
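
Note the E0327 line in the stderr above: --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge, so this subtest exercises the same bridge CNI path as TestNetworkPlugins/group/bridge below. The equivalent invocation without the deprecated flag would be (a sketch derived from that log line, not re-run here):

	out/minikube-darwin-arm64 start -p enable-default-cni-487000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2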

TestNetworkPlugins/group/bridge/Start (9.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.90601575s)

-- stdout --
	* [bridge-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-487000" primary control-plane node in "bridge-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:12:11.663700   15196 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:11.663818   15196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:11.663822   15196 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:11.663824   15196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:11.663947   15196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:11.664982   15196 out.go:298] Setting JSON to false
	I0327 14:12:11.681774   15196 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7901,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:11.681853   15196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:11.687970   15196 out.go:177] * [bridge-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:11.695011   15196 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:11.699834   15196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:11.695049   15196 notify.go:220] Checking for updates...
	I0327 14:12:11.705938   15196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:11.709049   15196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:11.711928   15196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:11.714990   15196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:11.718354   15196 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:11.718413   15196 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:12:11.718458   15196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:11.721885   15196 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:11.729036   15196 start.go:297] selected driver: qemu2
	I0327 14:12:11.729043   15196 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:11.729051   15196 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:11.731270   15196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:11.733880   15196 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:11.737044   15196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:11.737080   15196 cni.go:84] Creating CNI manager for "bridge"
	I0327 14:12:11.737083   15196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:12:11.737118   15196 start.go:340] cluster config:
	{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:11.741228   15196 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:11.746988   15196 out.go:177] * Starting "bridge-487000" primary control-plane node in "bridge-487000" cluster
	I0327 14:12:11.750937   15196 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:12:11.750950   15196 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:12:11.750958   15196 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:11.751009   15196 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:11.751017   15196 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:12:11.751084   15196 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/bridge-487000/config.json ...
	I0327 14:12:11.751094   15196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/bridge-487000/config.json: {Name:mkdd7777ab61a0e3f5e26ffd32cc5d267d70ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:11.751287   15196 start.go:360] acquireMachinesLock for bridge-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:11.751315   15196 start.go:364] duration metric: took 22.291µs to acquireMachinesLock for "bridge-487000"
	I0327 14:12:11.751326   15196 start.go:93] Provisioning new machine with config: &{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:11.751368   15196 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:11.758956   15196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:12:11.773441   15196 start.go:159] libmachine.API.Create for "bridge-487000" (driver="qemu2")
	I0327 14:12:11.773464   15196 client.go:168] LocalClient.Create starting
	I0327 14:12:11.773514   15196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:11.773549   15196 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:11.773558   15196 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:11.773600   15196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:11.773621   15196 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:11.773627   15196 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:11.773989   15196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:11.912025   15196 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:12.064794   15196 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:12.064802   15196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:12.064987   15196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:12.077709   15196 main.go:141] libmachine: STDOUT: 
	I0327 14:12:12.077736   15196 main.go:141] libmachine: STDERR: 
	I0327 14:12:12.077799   15196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2 +20000M
	I0327 14:12:12.088673   15196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:12.088694   15196 main.go:141] libmachine: STDERR: 
	I0327 14:12:12.088712   15196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:12.088718   15196 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:12.088748   15196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:bf:97:8a:7a:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:12.090563   15196 main.go:141] libmachine: STDOUT: 
	I0327 14:12:12.090580   15196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:12.090598   15196 client.go:171] duration metric: took 317.134458ms to LocalClient.Create
	I0327 14:12:14.092773   15196 start.go:128] duration metric: took 2.341418625s to createHost
	I0327 14:12:14.092822   15196 start.go:83] releasing machines lock for "bridge-487000", held for 2.341533167s
	W0327 14:12:14.092854   15196 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:14.105097   15196 out.go:177] * Deleting "bridge-487000" in qemu2 ...
	W0327 14:12:14.126973   15196 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:14.126994   15196 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:19.129193   15196 start.go:360] acquireMachinesLock for bridge-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:19.129684   15196 start.go:364] duration metric: took 385.625µs to acquireMachinesLock for "bridge-487000"
	I0327 14:12:19.129782   15196 start.go:93] Provisioning new machine with config: &{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:19.130045   15196 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:19.139748   15196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:12:19.187718   15196 start.go:159] libmachine.API.Create for "bridge-487000" (driver="qemu2")
	I0327 14:12:19.187777   15196 client.go:168] LocalClient.Create starting
	I0327 14:12:19.187890   15196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:19.187963   15196 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:19.187977   15196 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:19.188036   15196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:19.188077   15196 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:19.188093   15196 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:19.188646   15196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:19.337872   15196 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:19.465280   15196 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:19.465286   15196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:19.465469   15196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:19.478167   15196 main.go:141] libmachine: STDOUT: 
	I0327 14:12:19.478190   15196 main.go:141] libmachine: STDERR: 
	I0327 14:12:19.478275   15196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2 +20000M
	I0327 14:12:19.489226   15196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:19.489245   15196 main.go:141] libmachine: STDERR: 
	I0327 14:12:19.489259   15196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:19.489263   15196 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:19.489300   15196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:5a:69:a3:09:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/bridge-487000/disk.qcow2
	I0327 14:12:19.491112   15196 main.go:141] libmachine: STDOUT: 
	I0327 14:12:19.491129   15196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:19.491143   15196 client.go:171] duration metric: took 303.363833ms to LocalClient.Create
	I0327 14:12:21.493336   15196 start.go:128] duration metric: took 2.363258166s to createHost
	I0327 14:12:21.493449   15196 start.go:83] releasing machines lock for "bridge-487000", held for 2.363744458s
	W0327 14:12:21.493835   15196 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:21.506530   15196 out.go:177] 
	W0327 14:12:21.510662   15196 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:21.510768   15196 out.go:239] * 
	* 
	W0327 14:12:21.513379   15196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:21.524587   15196 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.91s)
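Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU command in the transcript above never launches a VM and the test exits with status 80. "Connection refused" on a unix socket means nothing is listening on it, i.e. the socket_vmnet daemon is not running (or not bound to that path) on this agent. A minimal preflight sketch in Go, using the socket path from the logs above (the program and its names are illustrative, not part of minikube):

// socket_vmnet preflight (illustrative): dial the unix socket before
// starting any qemu2 VM, so a missing daemon fails fast with a clear
// message instead of "exit status 80" several seconds into provisioning.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failures above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused": the socket file exists but no daemon is listening;
		// "no such file or directory": socket_vmnet was never started.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening; qemu2 VMs can attach")
}

On this agent the dial would fail with the same "Connection refused" seen in every transcript that follows, which points at a host-level daemon problem rather than anything test-specific.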

TestNetworkPlugins/group/kubenet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.755538958s)

-- stdout --
	* [kubenet-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-487000" primary control-plane node in "kubenet-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:12:23.984154   15314 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:23.984287   15314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:23.984291   15314 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:23.984294   15314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:23.984430   15314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:23.985602   15314 out.go:298] Setting JSON to false
	I0327 14:12:24.001766   15314 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7913,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:24.001824   15314 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:24.005786   15314 out.go:177] * [kubenet-487000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:24.012762   15314 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:24.012790   15314 notify.go:220] Checking for updates...
	I0327 14:12:24.016701   15314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:24.019713   15314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:24.022766   15314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:24.025707   15314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:24.028786   15314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:24.032089   15314 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:24.032151   15314 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:12:24.032213   15314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:24.035759   15314 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:24.042750   15314 start.go:297] selected driver: qemu2
	I0327 14:12:24.042756   15314 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:24.042762   15314 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:24.044985   15314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:24.047696   15314 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:24.050831   15314 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:24.050887   15314 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0327 14:12:24.050920   15314 start.go:340] cluster config:
	{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:24.055274   15314 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:24.062741   15314 out.go:177] * Starting "kubenet-487000" primary control-plane node in "kubenet-487000" cluster
	I0327 14:12:24.066717   15314 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:12:24.066745   15314 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:12:24.066753   15314 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:24.066816   15314 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:24.066822   15314 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:12:24.066886   15314 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kubenet-487000/config.json ...
	I0327 14:12:24.066897   15314 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/kubenet-487000/config.json: {Name:mk3e17bd372c4424a779cccf11372a1d4e9cc237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:24.067182   15314 start.go:360] acquireMachinesLock for kubenet-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:24.067217   15314 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "kubenet-487000"
	I0327 14:12:24.067229   15314 start.go:93] Provisioning new machine with config: &{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:24.067262   15314 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:24.070707   15314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:12:24.086400   15314 start.go:159] libmachine.API.Create for "kubenet-487000" (driver="qemu2")
	I0327 14:12:24.086440   15314 client.go:168] LocalClient.Create starting
	I0327 14:12:24.086498   15314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:24.086525   15314 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:24.086535   15314 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:24.086582   15314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:24.086605   15314 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:24.086610   15314 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:24.086975   15314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:24.224020   15314 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:24.267454   15314 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:24.267460   15314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:24.267633   15314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:24.280008   15314 main.go:141] libmachine: STDOUT: 
	I0327 14:12:24.280028   15314 main.go:141] libmachine: STDERR: 
	I0327 14:12:24.280088   15314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2 +20000M
	I0327 14:12:24.290919   15314 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:24.290936   15314 main.go:141] libmachine: STDERR: 
	I0327 14:12:24.290945   15314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:24.290960   15314 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:24.290988   15314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:18:9f:eb:d8:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:24.292754   15314 main.go:141] libmachine: STDOUT: 
	I0327 14:12:24.292769   15314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:24.292787   15314 client.go:171] duration metric: took 206.344208ms to LocalClient.Create
	I0327 14:12:26.295020   15314 start.go:128] duration metric: took 2.227759041s to createHost
	I0327 14:12:26.295142   15314 start.go:83] releasing machines lock for "kubenet-487000", held for 2.227945375s
	W0327 14:12:26.295200   15314 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:26.310434   15314 out.go:177] * Deleting "kubenet-487000" in qemu2 ...
	W0327 14:12:26.333073   15314 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:26.333097   15314 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:31.335293   15314 start.go:360] acquireMachinesLock for kubenet-487000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:31.335965   15314 start.go:364] duration metric: took 484.542µs to acquireMachinesLock for "kubenet-487000"
	I0327 14:12:31.336097   15314 start.go:93] Provisioning new machine with config: &{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:31.336352   15314 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:31.341159   15314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 14:12:31.390000   15314 start.go:159] libmachine.API.Create for "kubenet-487000" (driver="qemu2")
	I0327 14:12:31.390071   15314 client.go:168] LocalClient.Create starting
	I0327 14:12:31.390184   15314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:31.390253   15314 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:31.390270   15314 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:31.390343   15314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:31.390385   15314 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:31.390402   15314 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:31.390925   15314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:31.540056   15314 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:31.645170   15314 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:31.645177   15314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:31.645363   15314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:31.657862   15314 main.go:141] libmachine: STDOUT: 
	I0327 14:12:31.657883   15314 main.go:141] libmachine: STDERR: 
	I0327 14:12:31.657934   15314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2 +20000M
	I0327 14:12:31.668913   15314 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:31.668937   15314 main.go:141] libmachine: STDERR: 
	I0327 14:12:31.668949   15314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:31.668956   15314 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:31.668990   15314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cf:a9:c1:f6:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/kubenet-487000/disk.qcow2
	I0327 14:12:31.670904   15314 main.go:141] libmachine: STDOUT: 
	I0327 14:12:31.670922   15314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:31.670936   15314 client.go:171] duration metric: took 280.8635ms to LocalClient.Create
	I0327 14:12:33.673085   15314 start.go:128] duration metric: took 2.336726834s to createHost
	I0327 14:12:33.673146   15314 start.go:83] releasing machines lock for "kubenet-487000", held for 2.337187917s
	W0327 14:12:33.673574   15314 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:33.682025   15314 out.go:177] 
	W0327 14:12:33.688127   15314 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:33.688149   15314 out.go:239] * 
	* 
	W0327 14:12:33.689671   15314 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:33.699016   15314 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.76s)
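The kubenet transcript makes the retry flow visible: the first create fails, the profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), the second create fails identically, and the run exits with GUEST_PROVISION (exit status 80). Because the error is environmental, a single fixed retry cannot help. Below is a simplified sketch of that start/delete/retry shape in Go, reconstructed from the log lines above; it is our own illustration, not minikube's actual code:

// Illustrative reconstruction of the retry pattern logged by start.go:
// one failed create, a 5-second pause, one more attempt, then a fatal exit.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errSocket = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// createHost stands in for the libmachine create path; on this agent it
// always fails while the socket_vmnet daemon is down.
func createHost(profile string) error { return errSocket }

func startWithRetry(profile string) error {
	err := createHost(profile)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
	if err = createHost(profile); err != nil {
		return fmt.Errorf("GUEST_PROVISION: %w", err) // surfaces as exit status 80
	}
	return nil
}

func main() {
	if err := startWithRetry("kubenet-487000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}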

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.910284667s)

-- stdout --
	* [old-k8s-version-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:12:35.990268   15424 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:35.990394   15424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:35.990397   15424 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:35.990400   15424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:35.990541   15424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:35.991610   15424 out.go:298] Setting JSON to false
	I0327 14:12:36.007734   15424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7925,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:36.007795   15424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:36.014347   15424 out.go:177] * [old-k8s-version-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:36.021192   15424 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:36.021254   15424 notify.go:220] Checking for updates...
	I0327 14:12:36.025329   15424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:36.028362   15424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:36.029859   15424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:36.033392   15424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:36.036308   15424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:36.039654   15424 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:36.039721   15424 config.go:182] Loaded profile config "stopped-upgrade-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 14:12:36.039766   15424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:36.044378   15424 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:36.051328   15424 start.go:297] selected driver: qemu2
	I0327 14:12:36.051334   15424 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:36.051341   15424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:36.053617   15424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:36.056284   15424 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:36.059339   15424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:36.059376   15424 cni.go:84] Creating CNI manager for ""
	I0327 14:12:36.059383   15424 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 14:12:36.059402   15424 start.go:340] cluster config:
	{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:36.063685   15424 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:36.071320   15424 out.go:177] * Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	I0327 14:12:36.075365   15424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 14:12:36.075383   15424 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 14:12:36.075392   15424 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:36.075456   15424 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:36.075464   15424 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 14:12:36.075539   15424 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/old-k8s-version-462000/config.json ...
	I0327 14:12:36.075550   15424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/old-k8s-version-462000/config.json: {Name:mk58a3a8cb0fa5e175b315141c42f670731be41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:36.075770   15424 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:36.075802   15424 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "old-k8s-version-462000"
	I0327 14:12:36.075814   15424 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:36.075852   15424 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:36.084321   15424 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:36.101104   15424 start.go:159] libmachine.API.Create for "old-k8s-version-462000" (driver="qemu2")
	I0327 14:12:36.101134   15424 client.go:168] LocalClient.Create starting
	I0327 14:12:36.101191   15424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:36.101218   15424 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:36.101231   15424 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:36.101277   15424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:36.101298   15424 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:36.101306   15424 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:36.101690   15424 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:36.239371   15424 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:36.336416   15424 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:36.336424   15424 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:36.336603   15424 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:36.349633   15424 main.go:141] libmachine: STDOUT: 
	I0327 14:12:36.349658   15424 main.go:141] libmachine: STDERR: 
	I0327 14:12:36.349712   15424 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2 +20000M
	I0327 14:12:36.360781   15424 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:36.360801   15424 main.go:141] libmachine: STDERR: 
	I0327 14:12:36.360820   15424 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:36.360824   15424 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:36.360853   15424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ac:70:73:c3:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:36.362550   15424 main.go:141] libmachine: STDOUT: 
	I0327 14:12:36.362565   15424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:36.362583   15424 client.go:171] duration metric: took 261.446125ms to LocalClient.Create
	I0327 14:12:38.364231   15424 start.go:128] duration metric: took 2.28837725s to createHost
	I0327 14:12:38.364308   15424 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 2.288530375s
	W0327 14:12:38.364356   15424 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:38.375861   15424 out.go:177] * Deleting "old-k8s-version-462000" in qemu2 ...
	W0327 14:12:38.398143   15424 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:38.398172   15424 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:43.398522   15424 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:43.399030   15424 start.go:364] duration metric: took 361.917µs to acquireMachinesLock for "old-k8s-version-462000"
	I0327 14:12:43.399202   15424 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:43.399429   15424 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:43.405045   15424 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:43.455037   15424 start.go:159] libmachine.API.Create for "old-k8s-version-462000" (driver="qemu2")
	I0327 14:12:43.455088   15424 client.go:168] LocalClient.Create starting
	I0327 14:12:43.455195   15424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:43.455262   15424 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:43.455279   15424 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:43.455338   15424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:43.455379   15424 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:43.455394   15424 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:43.455899   15424 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:43.604108   15424 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:43.781996   15424 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:43.782006   15424 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:43.785390   15424 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:43.803574   15424 main.go:141] libmachine: STDOUT: 
	I0327 14:12:43.803600   15424 main.go:141] libmachine: STDERR: 
	I0327 14:12:43.803686   15424 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2 +20000M
	I0327 14:12:43.816501   15424 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:43.816517   15424 main.go:141] libmachine: STDERR: 
	I0327 14:12:43.816530   15424 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:43.816544   15424 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:43.816584   15424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:74:ff:8c:16:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:43.818461   15424 main.go:141] libmachine: STDOUT: 
	I0327 14:12:43.818480   15424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:43.818493   15424 client.go:171] duration metric: took 363.403458ms to LocalClient.Create
	I0327 14:12:45.820647   15424 start.go:128] duration metric: took 2.4212135s to createHost
	I0327 14:12:45.820749   15424 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 2.421722792s
	W0327 14:12:45.821138   15424 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:45.838637   15424 out.go:177] 
	W0327 14:12:45.843764   15424 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:45.843791   15424 out.go:239] * 
	* 
	W0327 14:12:45.846343   15424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:45.854505   15424 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (66.8895ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
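After the failed FirstStart, the post-mortem helper probes host state with `status --format={{.Host}}` and gets "Stopped" on stdout with exit status 7, which helpers_test.go notes "may be ok": the host never ran, so log retrieval is skipped. A sketch of that probe in Go using os/exec, with the binary path and flags copied from the helper invocation above (the exit-code interpretation in the comment reflects this run only, not a general contract):

// Post-mortem status probe (illustrative): run `minikube status` for a
// profile and report the exit code the way the helper above logs it.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-462000", "-n", "old-k8s-version-462000")
	out, err := cmd.Output() // stdout carries the host state, e.g. "Stopped"
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run: exit code 7 with host "Stopped" -- treated as "may be ok",
		// so log retrieval is skipped for a host that never started.
		fmt.Printf("status exit code %d (may be ok), host=%q\n", exitErr.ExitCode(), host)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Println("host state:", host)
}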

TestStartStop/group/no-preload/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (10.049290875s)

-- stdout --
	* [no-preload-063000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-063000" primary control-plane node in "no-preload-063000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-063000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 14:12:39.566542   15438 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:39.566665   15438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:39.566668   15438 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:39.566670   15438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:39.566794   15438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:39.567889   15438 out.go:298] Setting JSON to false
	I0327 14:12:39.584080   15438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7929,"bootTime":1711566030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:39.584143   15438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:39.587936   15438 out.go:177] * [no-preload-063000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:39.598714   15438 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:39.594789   15438 notify.go:220] Checking for updates...
	I0327 14:12:39.606727   15438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:39.613766   15438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:39.620734   15438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:39.626791   15438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:39.635799   15438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:39.640259   15438 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:39.640354   15438 config.go:182] Loaded profile config "old-k8s-version-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 14:12:39.640405   15438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:39.643765   15438 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:39.650673   15438 start.go:297] selected driver: qemu2
	I0327 14:12:39.650679   15438 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:39.650685   15438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:39.653234   15438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:39.656827   15438 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:39.662387   15438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:39.662438   15438 cni.go:84] Creating CNI manager for ""
	I0327 14:12:39.662448   15438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:12:39.662455   15438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:12:39.662495   15438 start.go:340] cluster config:
	{Name:no-preload-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:39.667861   15438 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.675798   15438 out.go:177] * Starting "no-preload-063000" primary control-plane node in "no-preload-063000" cluster
	I0327 14:12:39.679783   15438 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 14:12:39.679883   15438 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/no-preload-063000/config.json ...
	I0327 14:12:39.679902   15438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/no-preload-063000/config.json: {Name:mk491a5f4eb1b1ddb590a0bb0116b89ea2b9b228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:39.679937   15438 cache.go:107] acquiring lock: {Name:mk95ee8b8889c41cfcc444f65a848f051b38686b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.679972   15438 cache.go:107] acquiring lock: {Name:mkff4cfcea003dbe8067cd3276b43d67621dc705 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.679992   15438 cache.go:107] acquiring lock: {Name:mka1120dac074eef552b5a82b0c7d8cb12b7146e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680019   15438 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 14:12:39.680029   15438 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.5µs
	I0327 14:12:39.680038   15438 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 14:12:39.680049   15438 cache.go:107] acquiring lock: {Name:mk2e8e1fe9a6c9dd7b5e794afa2b661040edbb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680123   15438 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 14:12:39.679963   15438 cache.go:107] acquiring lock: {Name:mk5c82e94aa15cb96709c9fa29db07ea0d4c6e9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680169   15438 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 14:12:39.680180   15438 cache.go:107] acquiring lock: {Name:mkc4a073622b018598a6d2c97e7cc5a9415b626c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680158   15438 cache.go:107] acquiring lock: {Name:mkde073295053178375f8c1b26e7cb31bbc76f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680253   15438 cache.go:107] acquiring lock: {Name:mk1d7f0f6f3c2bf5f20b78b20370e8e8fe7eab80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:39.680392   15438 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 14:12:39.680392   15438 start.go:360] acquireMachinesLock for no-preload-063000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:39.680442   15438 start.go:364] duration metric: took 41.833µs to acquireMachinesLock for "no-preload-063000"
	I0327 14:12:39.680472   15438 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 14:12:39.680483   15438 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0327 14:12:39.680461   15438 start.go:93] Provisioning new machine with config: &{Name:no-preload-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:39.680507   15438 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:39.680520   15438 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 14:12:39.683789   15438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:39.680660   15438 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 14:12:39.689376   15438 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 14:12:39.690030   15438 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 14:12:39.694590   15438 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 14:12:39.695119   15438 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 14:12:39.695259   15438 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0327 14:12:39.695390   15438 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 14:12:39.695437   15438 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 14:12:39.704007   15438 start.go:159] libmachine.API.Create for "no-preload-063000" (driver="qemu2")
	I0327 14:12:39.704025   15438 client.go:168] LocalClient.Create starting
	I0327 14:12:39.704093   15438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:39.704124   15438 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:39.704134   15438 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:39.704186   15438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:39.704211   15438 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:39.704223   15438 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:39.704581   15438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:39.847516   15438 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:39.909146   15438 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:39.909168   15438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:39.909428   15438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:39.922495   15438 main.go:141] libmachine: STDOUT: 
	I0327 14:12:39.922522   15438 main.go:141] libmachine: STDERR: 
	I0327 14:12:39.922590   15438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2 +20000M
	I0327 14:12:39.934629   15438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:39.934645   15438 main.go:141] libmachine: STDERR: 
	I0327 14:12:39.934660   15438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:39.934664   15438 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:39.934695   15438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:97:4e:0c:31:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:39.936966   15438 main.go:141] libmachine: STDOUT: 
	I0327 14:12:39.936982   15438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:39.937008   15438 client.go:171] duration metric: took 232.980875ms to LocalClient.Create
	I0327 14:12:41.673347   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0327 14:12:41.716375   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0327 14:12:41.762228   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 14:12:41.801178   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0327 14:12:41.807958   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0327 14:12:41.820670   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0327 14:12:41.827701   15438 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0327 14:12:41.938062   15438 start.go:128] duration metric: took 2.25756775s to createHost
	I0327 14:12:41.938106   15438 start.go:83] releasing machines lock for "no-preload-063000", held for 2.257683209s
	W0327 14:12:41.938163   15438 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:41.947071   15438 out.go:177] * Deleting "no-preload-063000" in qemu2 ...
	I0327 14:12:41.957159   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 14:12:41.957202   15438 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.276997833s
	I0327 14:12:41.957225   15438 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	W0327 14:12:41.977468   15438 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:41.977494   15438 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:43.870609   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 14:12:43.870622   15438 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.190632292s
	I0327 14:12:43.870630   15438 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 14:12:44.689849   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 14:12:44.689934   15438 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 5.010052833s
	I0327 14:12:44.689978   15438 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 14:12:44.788746   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 14:12:44.788797   15438 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 5.108721167s
	I0327 14:12:44.788825   15438 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 14:12:46.390635   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 14:12:46.390662   15438 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 6.710762s
	I0327 14:12:46.390689   15438 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 14:12:46.793201   15438 cache.go:157] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 14:12:46.793258   15438 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 7.1134135s
	I0327 14:12:46.793293   15438 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 14:12:46.977570   15438 start.go:360] acquireMachinesLock for no-preload-063000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:46.977749   15438 start.go:364] duration metric: took 134.333µs to acquireMachinesLock for "no-preload-063000"
	I0327 14:12:46.977837   15438 start.go:93] Provisioning new machine with config: &{Name:no-preload-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:46.977964   15438 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:46.987390   15438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:47.037113   15438 start.go:159] libmachine.API.Create for "no-preload-063000" (driver="qemu2")
	I0327 14:12:47.037168   15438 client.go:168] LocalClient.Create starting
	I0327 14:12:47.037273   15438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:47.037350   15438 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:47.037370   15438 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:47.037451   15438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:47.037489   15438 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:47.037508   15438 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:47.038017   15438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:47.187790   15438 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:47.498131   15438 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:47.498141   15438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:47.498415   15438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:47.511490   15438 main.go:141] libmachine: STDOUT: 
	I0327 14:12:47.511514   15438 main.go:141] libmachine: STDERR: 
	I0327 14:12:47.511584   15438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2 +20000M
	I0327 14:12:47.522712   15438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:47.522730   15438 main.go:141] libmachine: STDERR: 
	I0327 14:12:47.522745   15438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:47.522749   15438 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:47.522796   15438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:69:16:ca:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:47.524664   15438 main.go:141] libmachine: STDOUT: 
	I0327 14:12:47.524692   15438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:47.524705   15438 client.go:171] duration metric: took 487.537875ms to LocalClient.Create
	I0327 14:12:49.525291   15438 start.go:128] duration metric: took 2.547335708s to createHost
	I0327 14:12:49.525305   15438 start.go:83] releasing machines lock for "no-preload-063000", held for 2.547580583s
	W0327 14:12:49.525346   15438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-063000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-063000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:49.539281   15438 out.go:177] 
	W0327 14:12:49.548327   15438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:49.548334   15438 out.go:239] * 
	* 
	W0327 14:12:49.548943   15438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:49.575269   15438 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (37.125209ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.09s)
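The trace above also shows minikube's retry path: the first createHost fails, the half-created profile is deleted ("* Deleting \"no-preload-063000\" in qemu2 ..."), and after "Will try again in 5 seconds" a second attempt runs, fails identically, and the run exits with GUEST_PROVISION and status 80. A sketch of that control flow (function names here are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver's host creation, which fails
// for as long as socket_vmnet is unreachable.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}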
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml: exit status 1 (29.806542ms)
** stderr ** 
	error: context "old-k8s-version-462000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (30.994167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (30.879417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
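Everything after the failed FirstStart is a cascade: the start never wrote an old-k8s-version-462000 context into the kubeconfig, so each kubectl --context invocation fails with "context does not exist" before any deployment is attempted. A sketch of the invocation the harness makes (kubectl on PATH assumed; the context name and manifest path are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-462000",
		"create", "-f", "testdata/busybox.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// prints: error: context "old-k8s-version-462000" does not exist
		fmt.Printf("%s(exit: %v)\n", out, err)
	}
}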
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-462000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system: exit status 1 (26.534875ms)
** stderr ** 
	error: context "old-k8s-version-462000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (31.346083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
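The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference the image composed from the two override flags: the registry from --registries prefixed onto the image from --images. A one-line sketch of that composition (inferred from the expected string in the failure message, not from minikube source):

package main

import "fmt"

func main() {
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}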
TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.217605375s)
-- stdout --
	* [old-k8s-version-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 14:12:49.423961   15524 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:49.424077   15524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:49.424080   15524 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:49.424083   15524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:49.424200   15524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:49.425278   15524 out.go:298] Setting JSON to false
	I0327 14:12:49.441233   15524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7939,"bootTime":1711566030,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:49.441290   15524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:49.446147   15524 out.go:177] * [old-k8s-version-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:49.453334   15524 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:49.453397   15524 notify.go:220] Checking for updates...
	I0327 14:12:49.461271   15524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:49.465347   15524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:49.468256   15524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:49.471324   15524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:49.478067   15524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:49.482543   15524 config.go:182] Loaded profile config "old-k8s-version-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 14:12:49.486271   15524 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 14:12:49.489237   15524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:49.492283   15524 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:12:49.498275   15524 start.go:297] selected driver: qemu2
	I0327 14:12:49.498280   15524 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:49.498337   15524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:49.500675   15524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:49.500715   15524 cni.go:84] Creating CNI manager for ""
	I0327 14:12:49.500723   15524 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 14:12:49.500764   15524 start.go:340] cluster config:
	{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:49.505180   15524 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:49.513273   15524 out.go:177] * Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	I0327 14:12:49.516270   15524 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 14:12:49.516286   15524 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 14:12:49.516295   15524 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:49.516353   15524 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:49.516358   15524 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 14:12:49.516412   15524 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/old-k8s-version-462000/config.json ...
	I0327 14:12:49.516740   15524 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:49.525324   15524 start.go:364] duration metric: took 8.577375ms to acquireMachinesLock for "old-k8s-version-462000"
	I0327 14:12:49.525338   15524 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:12:49.525345   15524 fix.go:54] fixHost starting: 
	I0327 14:12:49.525492   15524 fix.go:112] recreateIfNeeded on old-k8s-version-462000: state=Stopped err=<nil>
	W0327 14:12:49.525502   15524 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:12:49.539284   15524 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	I0327 14:12:49.548372   15524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:74:ff:8c:16:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:49.550638   15524 main.go:141] libmachine: STDOUT: 
	I0327 14:12:49.550662   15524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:49.550692   15524 fix.go:56] duration metric: took 25.347ms for fixHost
	I0327 14:12:49.550698   15524 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 25.36775ms
	W0327 14:12:49.550704   15524 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:49.550747   15524 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:49.550753   15524 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:54.552889   15524 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:54.553291   15524 start.go:364] duration metric: took 312.833µs to acquireMachinesLock for "old-k8s-version-462000"
	I0327 14:12:54.553446   15524 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:12:54.553467   15524 fix.go:54] fixHost starting: 
	I0327 14:12:54.554180   15524 fix.go:112] recreateIfNeeded on old-k8s-version-462000: state=Stopped err=<nil>
	W0327 14:12:54.554209   15524 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:12:54.562750   15524 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	I0327 14:12:54.565941   15524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:74:ff:8c:16:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I0327 14:12:54.575889   15524 main.go:141] libmachine: STDOUT: 
	I0327 14:12:54.575954   15524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:54.576023   15524 fix.go:56] duration metric: took 22.559458ms for fixHost
	I0327 14:12:54.576043   15524 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 22.726084ms
	W0327 14:12:54.576201   15524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:54.583775   15524 out.go:177] 
	W0327 14:12:54.586832   15524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:54.586861   15524 out.go:239] * 
	* 
	W0327 14:12:54.589365   15524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:54.597687   15524 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (67.728541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
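
Editor's note: every failure in this group reduces to the same root cause visible above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go probe such as the sketch below (our own helper, not part of minikube or the test suite) reproduces that check in isolation and would distinguish "daemon not running" from a genuine driver bug:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet dials the unix socket that socket_vmnet_client connects to.
// A "connection refused" here reproduces the driver start failure directly.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet unreachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	// Path taken from the log lines above.
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err) // e.g. "... connection refused" while the daemon is down
	}
}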

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-063000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-063000 create -f testdata/busybox.yaml: exit status 1 (26.95ms)

** stderr ** 
	error: context "no-preload-063000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-063000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (31.101417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (31.14275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
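
Editor's note: the DeployApp failure is secondary — kubectl is pointed at a context that was never written, because the preceding start failed. A hedged sketch of the precondition check (the helper is ours; `kubectl config get-contexts -o name` lists context names):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists shells out to kubectl, as the harness does, and reports
// whether the named context is present in the active kubeconfig.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("no-preload-063000")
	fmt.Println(ok, err) // false <nil> here, since the profile was never provisioned
}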

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-063000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-063000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-063000 describe deploy/metrics-server -n kube-system: exit status 1 (27.126542ms)

** stderr ** 
	error: context "no-preload-063000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-063000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (31.395333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
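
Editor's note: the assertion above expects the deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prepended to the --images override. A sketch of that composition under our reading of the flags (the function is illustrative, not minikube's code):

package main

import "fmt"

// expectedImage prepends a registry override to an image override, yielding
// the reference the test greps for in the deployment description.
func expectedImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}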

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.203475584s)

-- stdout --
	* [no-preload-063000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-063000" primary control-plane node in "no-preload-063000" cluster
	* Restarting existing qemu2 VM for "no-preload-063000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-063000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:12:53.344746   15569 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:53.344885   15569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:53.344889   15569 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:53.344891   15569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:53.345030   15569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:53.346011   15569 out.go:298] Setting JSON to false
	I0327 14:12:53.362021   15569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7943,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:53.362096   15569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:53.366346   15569 out.go:177] * [no-preload-063000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:53.373399   15569 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:53.377303   15569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:53.373474   15569 notify.go:220] Checking for updates...
	I0327 14:12:53.383327   15569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:53.386402   15569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:53.389314   15569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:53.392357   15569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:53.395648   15569 config.go:182] Loaded profile config "no-preload-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 14:12:53.395901   15569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:53.400351   15569 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:12:53.407222   15569 start.go:297] selected driver: qemu2
	I0327 14:12:53.407227   15569 start.go:901] validating driver "qemu2" against &{Name:no-preload-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:53.407277   15569 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:53.409518   15569 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:53.409560   15569 cni.go:84] Creating CNI manager for ""
	I0327 14:12:53.409567   15569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:12:53.409597   15569 start.go:340] cluster config:
	{Name:no-preload-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:53.413982   15569 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.421200   15569 out.go:177] * Starting "no-preload-063000" primary control-plane node in "no-preload-063000" cluster
	I0327 14:12:53.425289   15569 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 14:12:53.425375   15569 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/no-preload-063000/config.json ...
	I0327 14:12:53.425430   15569 cache.go:107] acquiring lock: {Name:mk95ee8b8889c41cfcc444f65a848f051b38686b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425438   15569 cache.go:107] acquiring lock: {Name:mka1120dac074eef552b5a82b0c7d8cb12b7146e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425446   15569 cache.go:107] acquiring lock: {Name:mkff4cfcea003dbe8067cd3276b43d67621dc705 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425491   15569 cache.go:107] acquiring lock: {Name:mk1d7f0f6f3c2bf5f20b78b20370e8e8fe7eab80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425503   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 14:12:53.425509   15569 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.584µs
	I0327 14:12:53.425515   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 14:12:53.425524   15569 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 101µs
	I0327 14:12:53.425528   15569 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 14:12:53.425519   15569 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 14:12:53.425528   15569 cache.go:107] acquiring lock: {Name:mkde073295053178375f8c1b26e7cb31bbc76f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425541   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 14:12:53.425546   15569 cache.go:107] acquiring lock: {Name:mkc4a073622b018598a6d2c97e7cc5a9415b626c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425549   15569 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 58.833µs
	I0327 14:12:53.425559   15569 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0327 14:12:53.425537   15569 cache.go:107] acquiring lock: {Name:mk2e8e1fe9a6c9dd7b5e794afa2b661040edbb5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425582   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 14:12:53.425586   15569 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 40.542µs
	I0327 14:12:53.425591   15569 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 14:12:53.425605   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 14:12:53.425610   15569 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 74.334µs
	I0327 14:12:53.425615   15569 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 14:12:53.425638   15569 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 14:12:53.425654   15569 cache.go:107] acquiring lock: {Name:mk5c82e94aa15cb96709c9fa29db07ea0d4c6e9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:53.425666   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 14:12:53.425678   15569 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 266.25µs
	I0327 14:12:53.425685   15569 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 14:12:53.425699   15569 cache.go:115] /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 14:12:53.425704   15569 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 68.084µs
	I0327 14:12:53.425711   15569 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 14:12:53.425891   15569 start.go:360] acquireMachinesLock for no-preload-063000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:53.425919   15569 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "no-preload-063000"
	I0327 14:12:53.425932   15569 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:12:53.425936   15569 fix.go:54] fixHost starting: 
	I0327 14:12:53.426067   15569 fix.go:112] recreateIfNeeded on no-preload-063000: state=Stopped err=<nil>
	W0327 14:12:53.426075   15569 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:12:53.433299   15569 out.go:177] * Restarting existing qemu2 VM for "no-preload-063000" ...
	I0327 14:12:53.437354   15569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:69:16:ca:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:53.437746   15569 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 14:12:53.439593   15569 main.go:141] libmachine: STDOUT: 
	I0327 14:12:53.439615   15569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:53.439652   15569 fix.go:56] duration metric: took 13.70775ms for fixHost
	I0327 14:12:53.439657   15569 start.go:83] releasing machines lock for "no-preload-063000", held for 13.731292ms
	W0327 14:12:53.439665   15569 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:53.439689   15569 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:53.439693   15569 start.go:728] Will try again in 5 seconds ...
	I0327 14:12:55.362134   15569 cache.go:162] opening:  /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 14:12:58.439925   15569 start.go:360] acquireMachinesLock for no-preload-063000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:58.440303   15569 start.go:364] duration metric: took 304.083µs to acquireMachinesLock for "no-preload-063000"
	I0327 14:12:58.440425   15569 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:12:58.440446   15569 fix.go:54] fixHost starting: 
	I0327 14:12:58.441169   15569 fix.go:112] recreateIfNeeded on no-preload-063000: state=Stopped err=<nil>
	W0327 14:12:58.441196   15569 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:12:58.453720   15569 out.go:177] * Restarting existing qemu2 VM for "no-preload-063000" ...
	I0327 14:12:58.458872   15569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:69:16:ca:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/no-preload-063000/disk.qcow2
	I0327 14:12:58.469411   15569 main.go:141] libmachine: STDOUT: 
	I0327 14:12:58.469485   15569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:58.469561   15569 fix.go:56] duration metric: took 29.118083ms for fixHost
	I0327 14:12:58.469583   15569 start.go:83] releasing machines lock for "no-preload-063000", held for 29.256167ms
	W0327 14:12:58.469775   15569 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-063000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-063000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:58.478695   15569 out.go:177] 
	W0327 14:12:58.482570   15569 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:12:58.482596   15569 out.go:239] * 
	* 
	W0327 14:12:58.484997   15569 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:12:58.494632   15569 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-063000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (66.65625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
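
Editor's note: the stderr log shows the start path's retry shape: one immediate attempt, a "StartHost failed, but will try again" warning, a 5-second sleep, then a final attempt before exiting with status 80. Below is a reconstruction of that control flow — a sketch inferred from the log, not minikube's actual start.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry makes n attempts, sleeping between them, and returns the
// last error, mirroring the two-attempt pattern in the log above.
func startWithRetry(start func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = start(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	failing := func() error {
		return errors.New(`connect "/var/run/socket_vmnet": connection refused`)
	}
	fmt.Println(startWithRetry(failing, 2, 5*time.Second))
}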

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-462000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (33.181375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-462000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.845417ms)

** stderr ** 
	error: context "old-k8s-version-462000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (31.1045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-462000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (30.632584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
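
Editor's note: the "-want +got" block above has the shape of a github.com/google/go-cmp diff — every expected v1.20.0 image is missing because `image list` ran against a VM that never booted. A minimal sketch, assuming that library, of how such a diff is produced:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/kube-proxy:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	var got []string // empty: the host is stopped, so no images are listed
	// cmp.Diff prints removed entries with "-" and added ones with "+",
	// matching the -want +got notation in the failure above.
	fmt.Println(cmp.Diff(want, got))
}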

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1: exit status 83 (46.775417ms)

-- stdout --
	* The control-plane node old-k8s-version-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-462000"

-- /stdout --
** stderr ** 
	I0327 14:12:54.875243   15592 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:54.875622   15592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:54.875626   15592 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:54.875628   15592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:54.875781   15592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:54.876003   15592 out.go:298] Setting JSON to false
	I0327 14:12:54.876012   15592 mustload.go:65] Loading cluster: old-k8s-version-462000
	I0327 14:12:54.876194   15592 config.go:182] Loaded profile config "old-k8s-version-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 14:12:54.880797   15592 out.go:177] * The control-plane node old-k8s-version-462000 host is not running: state=Stopped
	I0327 14:12:54.888686   15592 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-462000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (30.649791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (30.707542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
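
Editor's note: three distinct exit codes appear in this group: 80 (GUEST_PROVISION, the failed start), 83 (the "host is not running" hint from pause), and 7 (status reporting a stopped host). A sketch of how a harness can recover those codes in Go — our own helper, standard library only:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and surfaces its exit status, the value the
// assertions above compare against (80, 83, 7, ...).
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // the command could not be started at all
	}
	return 0
}

func main() {
	fmt.Println(exitCode("false")) // prints 1 on a typical Unix system
}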

TestStartStop/group/embed-certs/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.724122333s)

-- stdout --
	* [embed-certs-995000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-995000" primary control-plane node in "embed-certs-995000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-995000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:12:55.356745   15615 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:55.356865   15615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:55.356869   15615 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:55.356872   15615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:55.357012   15615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:55.358089   15615 out.go:298] Setting JSON to false
	I0327 14:12:55.374608   15615 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7945,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:55.374685   15615 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:55.377031   15615 out.go:177] * [embed-certs-995000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:55.383981   15615 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:55.384051   15615 notify.go:220] Checking for updates...
	I0327 14:12:55.386895   15615 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:55.389927   15615 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:55.392961   15615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:55.394452   15615 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:55.398021   15615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:55.401297   15615 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:55.401354   15615 config.go:182] Loaded profile config "no-preload-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 14:12:55.401406   15615 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:55.405800   15615 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:55.412904   15615 start.go:297] selected driver: qemu2
	I0327 14:12:55.412909   15615 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:55.412914   15615 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:55.415096   15615 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:55.417874   15615 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:55.421055   15615 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:55.421094   15615 cni.go:84] Creating CNI manager for ""
	I0327 14:12:55.421101   15615 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:12:55.421105   15615 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:12:55.421144   15615 start.go:340] cluster config:
	{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:55.425595   15615 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:55.432918   15615 out.go:177] * Starting "embed-certs-995000" primary control-plane node in "embed-certs-995000" cluster
	I0327 14:12:55.436939   15615 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:12:55.436953   15615 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:12:55.436962   15615 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:55.437041   15615 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:55.437054   15615 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:12:55.437139   15615 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/embed-certs-995000/config.json ...
	I0327 14:12:55.437152   15615 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/embed-certs-995000/config.json: {Name:mk91190ce5ad9608fac573b93a27cab0c5c4d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:55.437391   15615 start.go:360] acquireMachinesLock for embed-certs-995000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:55.437423   15615 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "embed-certs-995000"
	I0327 14:12:55.437435   15615 start.go:93] Provisioning new machine with config: &{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:55.437470   15615 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:55.440934   15615 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:55.457611   15615 start.go:159] libmachine.API.Create for "embed-certs-995000" (driver="qemu2")
	I0327 14:12:55.457638   15615 client.go:168] LocalClient.Create starting
	I0327 14:12:55.457707   15615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:55.457738   15615 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:55.457748   15615 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:55.457791   15615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:55.457812   15615 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:55.457820   15615 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:55.458168   15615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:55.595329   15615 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:55.654792   15615 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:55.654797   15615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:55.654962   15615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:12:55.666988   15615 main.go:141] libmachine: STDOUT: 
	I0327 14:12:55.667005   15615 main.go:141] libmachine: STDERR: 
	I0327 14:12:55.667062   15615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2 +20000M
	I0327 14:12:55.677603   15615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:55.677620   15615 main.go:141] libmachine: STDERR: 
	I0327 14:12:55.677637   15615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:12:55.677641   15615 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:55.677675   15615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:b7:5c:09:70:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:12:55.679357   15615 main.go:141] libmachine: STDOUT: 
	I0327 14:12:55.679369   15615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:55.679388   15615 client.go:171] duration metric: took 221.747958ms to LocalClient.Create
	I0327 14:12:57.679814   15615 start.go:128] duration metric: took 2.242337542s to createHost
	I0327 14:12:57.679884   15615 start.go:83] releasing machines lock for "embed-certs-995000", held for 2.242480334s
	W0327 14:12:57.679937   15615 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:57.693221   15615 out.go:177] * Deleting "embed-certs-995000" in qemu2 ...
	W0327 14:12:57.719846   15615 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:12:57.719873   15615 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:02.721332   15615 start.go:360] acquireMachinesLock for embed-certs-995000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:02.721513   15615 start.go:364] duration metric: took 139.791µs to acquireMachinesLock for "embed-certs-995000"
	I0327 14:13:02.721569   15615 start.go:93] Provisioning new machine with config: &{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:13:02.721684   15615 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:13:02.726141   15615 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:13:02.756052   15615 start.go:159] libmachine.API.Create for "embed-certs-995000" (driver="qemu2")
	I0327 14:13:02.756080   15615 client.go:168] LocalClient.Create starting
	I0327 14:13:02.756166   15615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:13:02.756215   15615 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:02.756226   15615 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:02.756273   15615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:13:02.756296   15615 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:02.756304   15615 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:02.756599   15615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:13:02.900105   15615 main.go:141] libmachine: Creating SSH key...
	I0327 14:13:02.978540   15615 main.go:141] libmachine: Creating Disk image...
	I0327 14:13:02.978548   15615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:13:02.978731   15615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:13:02.991067   15615 main.go:141] libmachine: STDOUT: 
	I0327 14:13:02.991090   15615 main.go:141] libmachine: STDERR: 
	I0327 14:13:02.991150   15615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2 +20000M
	I0327 14:13:03.001931   15615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:13:03.001948   15615 main.go:141] libmachine: STDERR: 
	I0327 14:13:03.001960   15615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:13:03.001965   15615 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:13:03.002006   15615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:b8:d6:ea:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:13:03.003805   15615 main.go:141] libmachine: STDOUT: 
	I0327 14:13:03.003820   15615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:03.003833   15615 client.go:171] duration metric: took 247.752458ms to LocalClient.Create
	I0327 14:13:05.005987   15615 start.go:128] duration metric: took 2.284307291s to createHost
	I0327 14:13:05.006038   15615 start.go:83] releasing machines lock for "embed-certs-995000", held for 2.284543167s
	W0327 14:13:05.006381   15615 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-995000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-995000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:05.020048   15615 out.go:177] 
	W0327 14:13:05.024089   15615 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:05.024120   15615 out.go:239] * 
	* 
	W0327 14:13:05.026855   15615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:05.035892   15615 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (68.149417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.79s)
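Every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the profile is left Stopped. A minimal triage sketch for the CI host, assuming a Homebrew-managed socket_vmnet daemon (the binary and socket paths are taken from the qemu command lines above; the service start command is an assumption):

	# Is the multiplexer socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# One common way to (re)start the daemon on Homebrew installs (assumption):
	sudo brew services start socket_vmnet

If the daemon is down, the later "context ... does not exist" failures below appear to be downstream of this first-start failure rather than independent bugs.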

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-063000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (32.99675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
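The post-stop assertions never reach the cluster: kubectl has no "no-preload-063000" context because the first start failed before a kubeconfig entry was written. A quick manual equivalent of what the harness is waiting for, assuming the context had been created (both commands are standard kubectl; the namespace is the one the test queries below):

	# Confirm the context is actually missing from the kubeconfig
	kubectl config get-contexts
	# What the 'addon dashboard' wait boils down to once the context exists
	kubectl --context no-preload-063000 get pods -n kubernetes-dashboard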

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-063000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-063000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-063000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.685917ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-063000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-063000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (30.397084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-063000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (30.710958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
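The (-want +got) diff above lists every expected image as missing because the "got" side is empty: with the host Stopped, "image list" has no container runtime to query. A hedged re-check sketch against a running profile (the first command is the exact one the test ran; the grep spot-check is illustrative):

	out/minikube-darwin-arm64 -p no-preload-063000 image list --format=json
	# Spot-check one expected v1.30.0-beta.0 image once the VM is up
	out/minikube-darwin-arm64 -p no-preload-063000 image list | grep kube-apiserver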

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-063000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-063000 --alsologtostderr -v=1: exit status 83 (41.715292ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-063000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-063000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 14:12:58.778517   15637 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:58.778674   15637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:58.778677   15637 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:58.778679   15637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:58.778804   15637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:58.779008   15637 out.go:298] Setting JSON to false
	I0327 14:12:58.779016   15637 mustload.go:65] Loading cluster: no-preload-063000
	I0327 14:12:58.779210   15637 config.go:182] Loaded profile config "no-preload-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 14:12:58.783194   15637 out.go:177] * The control-plane node no-preload-063000 host is not running: state=Stopped
	I0327 14:12:58.787202   15637 out.go:177]   To start a cluster, run: "minikube start -p no-preload-063000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-063000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (31.20525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (30.757208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-063000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.749418209s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-149000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-149000" primary control-plane node in "default-k8s-diff-port-149000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 14:12:59.485006   15672 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:12:59.485147   15672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:59.485151   15672 out.go:304] Setting ErrFile to fd 2...
	I0327 14:12:59.485153   15672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:12:59.485286   15672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:12:59.486422   15672 out.go:298] Setting JSON to false
	I0327 14:12:59.502692   15672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7949,"bootTime":1711566030,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:12:59.502756   15672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:12:59.507008   15672 out.go:177] * [default-k8s-diff-port-149000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:12:59.514088   15672 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:12:59.518030   15672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:12:59.514153   15672 notify.go:220] Checking for updates...
	I0327 14:12:59.521013   15672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:12:59.524038   15672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:12:59.526971   15672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:12:59.530033   15672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:12:59.533445   15672 config.go:182] Loaded profile config "embed-certs-995000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:59.533507   15672 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:12:59.533554   15672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:12:59.537958   15672 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:12:59.545041   15672 start.go:297] selected driver: qemu2
	I0327 14:12:59.545049   15672 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:12:59.545054   15672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:12:59.547380   15672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 14:12:59.550942   15672 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:12:59.554092   15672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:12:59.554136   15672 cni.go:84] Creating CNI manager for ""
	I0327 14:12:59.554147   15672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:12:59.554151   15672 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:12:59.554187   15672 start.go:340] cluster config:
	{Name:default-k8s-diff-port-149000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:12:59.558641   15672 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:12:59.564018   15672 out.go:177] * Starting "default-k8s-diff-port-149000" primary control-plane node in "default-k8s-diff-port-149000" cluster
	I0327 14:12:59.568019   15672 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:12:59.568039   15672 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:12:59.568054   15672 cache.go:56] Caching tarball of preloaded images
	I0327 14:12:59.568112   15672 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:12:59.568118   15672 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:12:59.568187   15672 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/default-k8s-diff-port-149000/config.json ...
	I0327 14:12:59.568201   15672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/default-k8s-diff-port-149000/config.json: {Name:mkf635fe60f3c14b4a083dd3ff2e6a42ae41fd79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:12:59.568427   15672 start.go:360] acquireMachinesLock for default-k8s-diff-port-149000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:12:59.568462   15672 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "default-k8s-diff-port-149000"
	I0327 14:12:59.568475   15672 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:12:59.568513   15672 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:12:59.576034   15672 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:12:59.593522   15672 start.go:159] libmachine.API.Create for "default-k8s-diff-port-149000" (driver="qemu2")
	I0327 14:12:59.593550   15672 client.go:168] LocalClient.Create starting
	I0327 14:12:59.593624   15672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:12:59.593659   15672 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:59.593669   15672 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:59.593717   15672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:12:59.593742   15672 main.go:141] libmachine: Decoding PEM data...
	I0327 14:12:59.593750   15672 main.go:141] libmachine: Parsing certificate...
	I0327 14:12:59.594157   15672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:12:59.734326   15672 main.go:141] libmachine: Creating SSH key...
	I0327 14:12:59.764868   15672 main.go:141] libmachine: Creating Disk image...
	I0327 14:12:59.764877   15672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:12:59.765059   15672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:12:59.777434   15672 main.go:141] libmachine: STDOUT: 
	I0327 14:12:59.777455   15672 main.go:141] libmachine: STDERR: 
	I0327 14:12:59.777517   15672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2 +20000M
	I0327 14:12:59.788148   15672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:12:59.788175   15672 main.go:141] libmachine: STDERR: 
	I0327 14:12:59.788190   15672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:12:59.788196   15672 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:12:59.788226   15672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:10:7d:bd:d3:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:12:59.789999   15672 main.go:141] libmachine: STDOUT: 
	I0327 14:12:59.790017   15672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:12:59.790038   15672 client.go:171] duration metric: took 196.484542ms to LocalClient.Create
	I0327 14:13:01.791146   15672 start.go:128] duration metric: took 2.222638916s to createHost
	I0327 14:13:01.791226   15672 start.go:83] releasing machines lock for "default-k8s-diff-port-149000", held for 2.222784833s
	W0327 14:13:01.791274   15672 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:01.797508   15672 out.go:177] * Deleting "default-k8s-diff-port-149000" in qemu2 ...
	W0327 14:13:01.822409   15672 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:01.822450   15672 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:06.824589   15672 start.go:360] acquireMachinesLock for default-k8s-diff-port-149000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:06.825061   15672 start.go:364] duration metric: took 391.125µs to acquireMachinesLock for "default-k8s-diff-port-149000"
	I0327 14:13:06.825240   15672 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:13:06.825517   15672 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:13:06.836199   15672 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:13:06.885394   15672 start.go:159] libmachine.API.Create for "default-k8s-diff-port-149000" (driver="qemu2")
	I0327 14:13:06.885459   15672 client.go:168] LocalClient.Create starting
	I0327 14:13:06.885575   15672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:13:06.885634   15672 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:06.885648   15672 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:06.885719   15672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:13:06.885759   15672 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:06.885774   15672 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:06.886445   15672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:13:07.034596   15672 main.go:141] libmachine: Creating SSH key...
	I0327 14:13:07.127715   15672 main.go:141] libmachine: Creating Disk image...
	I0327 14:13:07.127721   15672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:13:07.127887   15672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:13:07.140209   15672 main.go:141] libmachine: STDOUT: 
	I0327 14:13:07.140228   15672 main.go:141] libmachine: STDERR: 
	I0327 14:13:07.140283   15672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2 +20000M
	I0327 14:13:07.150954   15672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:13:07.150982   15672 main.go:141] libmachine: STDERR: 
	I0327 14:13:07.150993   15672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:13:07.150998   15672 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:13:07.151030   15672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f4:a5:bf:04:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:13:07.152744   15672 main.go:141] libmachine: STDOUT: 
	I0327 14:13:07.152762   15672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:07.152775   15672 client.go:171] duration metric: took 267.314417ms to LocalClient.Create
	I0327 14:13:09.154943   15672 start.go:128] duration metric: took 2.329426s to createHost
	I0327 14:13:09.155024   15672 start.go:83] releasing machines lock for "default-k8s-diff-port-149000", held for 2.329967834s
	W0327 14:13:09.155374   15672 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:09.168938   15672 out.go:177] 
	W0327 14:13:09.176930   15672 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:09.176965   15672 out.go:239] * 
	* 
	W0327 14:13:09.179587   15672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:09.189960   15672 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (66.394792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-995000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-995000 create -f testdata/busybox.yaml: exit status 1 (29.62225ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-995000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-995000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (30.72075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (30.330041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-995000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-995000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-995000 describe deploy/metrics-server -n kube-system: exit status 1 (27.005916ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-995000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-995000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (31.081917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.78s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.708391708s)

                                                
                                                
-- stdout --
	* [embed-certs-995000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-995000" primary control-plane node in "embed-certs-995000" cluster
	* Restarting existing qemu2 VM for "embed-certs-995000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-995000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 14:13:08.576298   15725 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:08.576440   15725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:08.576443   15725 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:08.576445   15725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:08.576563   15725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:08.577570   15725 out.go:298] Setting JSON to false
	I0327 14:13:08.593604   15725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7958,"bootTime":1711566030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:13:08.593662   15725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:13:08.598930   15725 out.go:177] * [embed-certs-995000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:13:08.605857   15725 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:13:08.605907   15725 notify.go:220] Checking for updates...
	I0327 14:13:08.609925   15725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:13:08.611401   15725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:13:08.614911   15725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:13:08.617925   15725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:13:08.625665   15725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:13:08.629218   15725 config.go:182] Loaded profile config "embed-certs-995000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:08.629478   15725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:13:08.633836   15725 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:13:08.638865   15725 start.go:297] selected driver: qemu2
	I0327 14:13:08.638872   15725 start.go:901] validating driver "qemu2" against &{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:08.638933   15725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:13:08.641216   15725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:13:08.641289   15725 cni.go:84] Creating CNI manager for ""
	I0327 14:13:08.641296   15725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:13:08.641326   15725 start.go:340] cluster config:
	{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:08.645713   15725 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:13:08.652877   15725 out.go:177] * Starting "embed-certs-995000" primary control-plane node in "embed-certs-995000" cluster
	I0327 14:13:08.656856   15725 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:13:08.656873   15725 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:13:08.656883   15725 cache.go:56] Caching tarball of preloaded images
	I0327 14:13:08.657004   15725 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:13:08.657028   15725 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:13:08.657113   15725 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/embed-certs-995000/config.json ...
	I0327 14:13:08.657610   15725 start.go:360] acquireMachinesLock for embed-certs-995000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:09.155201   15725 start.go:364] duration metric: took 497.554292ms to acquireMachinesLock for "embed-certs-995000"
	I0327 14:13:09.155400   15725 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:09.155419   15725 fix.go:54] fixHost starting: 
	I0327 14:13:09.156094   15725 fix.go:112] recreateIfNeeded on embed-certs-995000: state=Stopped err=<nil>
	W0327 14:13:09.156132   15725 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:09.173028   15725 out.go:177] * Restarting existing qemu2 VM for "embed-certs-995000" ...
	I0327 14:13:09.181140   15725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:b8:d6:ea:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:13:09.191777   15725 main.go:141] libmachine: STDOUT: 
	I0327 14:13:09.191840   15725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:09.191982   15725 fix.go:56] duration metric: took 36.558834ms for fixHost
	I0327 14:13:09.192002   15725 start.go:83] releasing machines lock for "embed-certs-995000", held for 36.712333ms
	W0327 14:13:09.192036   15725 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:09.192194   15725 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:09.192219   15725 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:14.194376   15725 start.go:360] acquireMachinesLock for embed-certs-995000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:14.194745   15725 start.go:364] duration metric: took 279.042µs to acquireMachinesLock for "embed-certs-995000"
	I0327 14:13:14.194862   15725 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:14.194884   15725 fix.go:54] fixHost starting: 
	I0327 14:13:14.195637   15725 fix.go:112] recreateIfNeeded on embed-certs-995000: state=Stopped err=<nil>
	W0327 14:13:14.195683   15725 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:14.201108   15725 out.go:177] * Restarting existing qemu2 VM for "embed-certs-995000" ...
	I0327 14:13:14.208353   15725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:b8:d6:ea:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/embed-certs-995000/disk.qcow2
	I0327 14:13:14.218050   15725 main.go:141] libmachine: STDOUT: 
	I0327 14:13:14.218115   15725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:14.218205   15725 fix.go:56] duration metric: took 23.32275ms for fixHost
	I0327 14:13:14.218224   15725 start.go:83] releasing machines lock for "embed-certs-995000", held for 23.457416ms
	W0327 14:13:14.218390   15725 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-995000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-995000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:14.227152   15725 out.go:177] 
	W0327 14:13:14.230156   15725 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:14.230198   15725 out.go:239] * 
	* 
	W0327 14:13:14.232992   15725 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:14.240111   15725 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (67.978292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.78s)
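
Note: every start failure in this group reduces to the same root cause visible in the stderr above: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client cannot attach the VM's network and the driver aborts with "Connection refused". The probe below is a minimal, illustrative Go sketch of that first step; the socket path is taken from the log, and this is not the driver's actual code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client starts by dialing this unix socket; if no daemon
		// is bound to it, the dial fails exactly like the ERROR lines above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}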

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-149000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149000 create -f testdata/busybox.yaml: exit status 1 (29.041ms)

** stderr ** 
	error: context "default-k8s-diff-port-149000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-149000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (31.031834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (30.761333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
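
Note: this failure and the addon/dashboard checks that follow are downstream of the failed restarts above: because the VM never came back up, minikube never rewrote the kubeconfig entry for the profile, so every kubectl --context invocation fails with context "..." does not exist. A small sketch of checking that precondition with the standard client-go kubeconfig loader (a hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// hasContext reports whether the default kubeconfig chain defines the named context.
	func hasContext(name string) (bool, error) {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}

	func main() {
		ok, err := hasContext("default-k8s-diff-port-149000")
		fmt.Println(ok, err) // prints "false <nil>" in the state captured by this report
	}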

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-149000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-149000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149000 describe deploy/metrics-server -n kube-system: exit status 1 (27.279125ms)

** stderr ** 
	error: context "default-k8s-diff-port-149000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-149000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (31.345792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.194785666s)

-- stdout --
	* [default-k8s-diff-port-149000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-149000" primary control-plane node in "default-k8s-diff-port-149000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:13:12.822371   15770 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:12.822485   15770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:12.822489   15770 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:12.822496   15770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:12.822626   15770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:12.823630   15770 out.go:298] Setting JSON to false
	I0327 14:13:12.839677   15770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7962,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:13:12.839746   15770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:13:12.844701   15770 out.go:177] * [default-k8s-diff-port-149000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:13:12.851883   15770 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:13:12.854841   15770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:13:12.851933   15770 notify.go:220] Checking for updates...
	I0327 14:13:12.860813   15770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:13:12.863721   15770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:13:12.866802   15770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:13:12.869834   15770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:13:12.873045   15770 config.go:182] Loaded profile config "default-k8s-diff-port-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:12.873319   15770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:13:12.877789   15770 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:13:12.883803   15770 start.go:297] selected driver: qemu2
	I0327 14:13:12.883809   15770 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:12.883886   15770 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:13:12.886191   15770 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 14:13:12.886239   15770 cni.go:84] Creating CNI manager for ""
	I0327 14:13:12.886246   15770 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:13:12.886273   15770 start.go:340] cluster config:
	{Name:default-k8s-diff-port-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:12.890675   15770 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:13:12.897854   15770 out.go:177] * Starting "default-k8s-diff-port-149000" primary control-plane node in "default-k8s-diff-port-149000" cluster
	I0327 14:13:12.901814   15770 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 14:13:12.901830   15770 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 14:13:12.901838   15770 cache.go:56] Caching tarball of preloaded images
	I0327 14:13:12.901901   15770 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:13:12.901906   15770 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 14:13:12.901970   15770 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/default-k8s-diff-port-149000/config.json ...
	I0327 14:13:12.902459   15770 start.go:360] acquireMachinesLock for default-k8s-diff-port-149000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:12.902485   15770 start.go:364] duration metric: took 21µs to acquireMachinesLock for "default-k8s-diff-port-149000"
	I0327 14:13:12.902494   15770 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:12.902499   15770 fix.go:54] fixHost starting: 
	I0327 14:13:12.902615   15770 fix.go:112] recreateIfNeeded on default-k8s-diff-port-149000: state=Stopped err=<nil>
	W0327 14:13:12.902628   15770 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:12.906769   15770 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-149000" ...
	I0327 14:13:12.914797   15770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f4:a5:bf:04:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:13:12.916828   15770 main.go:141] libmachine: STDOUT: 
	I0327 14:13:12.916850   15770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:12.916881   15770 fix.go:56] duration metric: took 14.381583ms for fixHost
	I0327 14:13:12.916885   15770 start.go:83] releasing machines lock for "default-k8s-diff-port-149000", held for 14.396167ms
	W0327 14:13:12.916892   15770 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:12.916920   15770 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:12.916925   15770 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:17.919053   15770 start.go:360] acquireMachinesLock for default-k8s-diff-port-149000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:17.919493   15770 start.go:364] duration metric: took 326.584µs to acquireMachinesLock for "default-k8s-diff-port-149000"
	I0327 14:13:17.919607   15770 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:17.919627   15770 fix.go:54] fixHost starting: 
	I0327 14:13:17.920348   15770 fix.go:112] recreateIfNeeded on default-k8s-diff-port-149000: state=Stopped err=<nil>
	W0327 14:13:17.920376   15770 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:17.934708   15770 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-149000" ...
	I0327 14:13:17.939710   15770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f4:a5:bf:04:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/default-k8s-diff-port-149000/disk.qcow2
	I0327 14:13:17.949899   15770 main.go:141] libmachine: STDOUT: 
	I0327 14:13:17.949989   15770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:17.950066   15770 fix.go:56] duration metric: took 30.439167ms for fixHost
	I0327 14:13:17.950084   15770 start.go:83] releasing machines lock for "default-k8s-diff-port-149000", held for 30.56675ms
	W0327 14:13:17.950291   15770 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:17.957384   15770 out.go:177] 
	W0327 14:13:17.960675   15770 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:17.960720   15770 out.go:239] * 
	* 
	W0327 14:13:17.963512   15770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:17.973466   15770 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-149000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (67.881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-995000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (32.916209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-995000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-995000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-995000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.334083ms)

** stderr ** 
	error: context "embed-certs-995000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-995000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (31.258708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-995000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (30.606875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
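
Note: the (-want +got) block above is a go-cmp style diff: every expected image sits on the - (want) side and nothing appears on the + (got) side, because image list had no running VM to query. A minimal sketch reproducing the shape of that output (assuming github.com/google/go-cmp, which emits this -want +got format):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"}
		got := []string{} // empty: the stopped VM reported no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}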

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-995000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-995000 --alsologtostderr -v=1: exit status 83 (42.86775ms)

-- stdout --
	* The control-plane node embed-certs-995000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-995000"

-- /stdout --
** stderr ** 
	I0327 14:13:14.517621   15789 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:14.517778   15789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:14.517781   15789 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:14.517783   15789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:14.517924   15789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:14.518143   15789 out.go:298] Setting JSON to false
	I0327 14:13:14.518152   15789 mustload.go:65] Loading cluster: embed-certs-995000
	I0327 14:13:14.518347   15789 config.go:182] Loaded profile config "embed-certs-995000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:14.522349   15789 out.go:177] * The control-plane node embed-certs-995000 host is not running: state=Stopped
	I0327 14:13:14.526145   15789 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-995000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-995000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (30.482125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (30.9495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-995000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.922053875s)

-- stdout --
	* [newest-cni-871000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-871000" primary control-plane node in "newest-cni-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:13:14.984292   15812 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:14.984424   15812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:14.984428   15812 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:14.984430   15812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:14.984553   15812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:14.985639   15812 out.go:298] Setting JSON to false
	I0327 14:13:15.001855   15812 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7964,"bootTime":1711566030,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:13:15.001920   15812 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:13:15.005774   15812 out.go:177] * [newest-cni-871000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:13:15.016551   15812 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:13:15.012668   15812 notify.go:220] Checking for updates...
	I0327 14:13:15.022488   15812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:13:15.025636   15812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:13:15.028677   15812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:13:15.030124   15812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:13:15.033615   15812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:13:15.037058   15812 config.go:182] Loaded profile config "default-k8s-diff-port-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:15.037118   15812 config.go:182] Loaded profile config "multinode-294000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:15.037174   15812 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:13:15.041525   15812 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 14:13:15.048576   15812 start.go:297] selected driver: qemu2
	I0327 14:13:15.048582   15812 start.go:901] validating driver "qemu2" against <nil>
	I0327 14:13:15.048587   15812 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:13:15.050879   15812 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0327 14:13:15.050902   15812 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0327 14:13:15.057581   15812 out.go:177] * Automatically selected the socket_vmnet network
	I0327 14:13:15.060765   15812 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 14:13:15.060816   15812 cni.go:84] Creating CNI manager for ""
	I0327 14:13:15.060825   15812 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:13:15.060831   15812 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 14:13:15.060880   15812 start.go:340] cluster config:
	{Name:newest-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:15.065936   15812 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:13:15.072657   15812 out.go:177] * Starting "newest-cni-871000" primary control-plane node in "newest-cni-871000" cluster
	I0327 14:13:15.076601   15812 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 14:13:15.076616   15812 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 14:13:15.076626   15812 cache.go:56] Caching tarball of preloaded images
	I0327 14:13:15.076679   15812 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:13:15.076685   15812 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 14:13:15.076747   15812 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/newest-cni-871000/config.json ...
	I0327 14:13:15.076759   15812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/newest-cni-871000/config.json: {Name:mk7f141ca5230009f3bc0a6dbeac51b7e49131a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 14:13:15.077081   15812 start.go:360] acquireMachinesLock for newest-cni-871000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:15.077117   15812 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "newest-cni-871000"
	I0327 14:13:15.077130   15812 start.go:93] Provisioning new machine with config: &{Name:newest-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:13:15.077167   15812 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:13:15.080681   15812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:13:15.098747   15812 start.go:159] libmachine.API.Create for "newest-cni-871000" (driver="qemu2")
	I0327 14:13:15.098773   15812 client.go:168] LocalClient.Create starting
	I0327 14:13:15.098836   15812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:13:15.098866   15812 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:15.098875   15812 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:15.098920   15812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:13:15.098943   15812 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:15.098950   15812 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:15.099331   15812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:13:15.240166   15812 main.go:141] libmachine: Creating SSH key...
	I0327 14:13:15.487457   15812 main.go:141] libmachine: Creating Disk image...
	I0327 14:13:15.487470   15812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:13:15.487663   15812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:15.500581   15812 main.go:141] libmachine: STDOUT: 
	I0327 14:13:15.500611   15812 main.go:141] libmachine: STDERR: 
	I0327 14:13:15.500688   15812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2 +20000M
	I0327 14:13:15.511677   15812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:13:15.511694   15812 main.go:141] libmachine: STDERR: 
	I0327 14:13:15.511716   15812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:15.511727   15812 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:13:15.511764   15812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:9b:88:1c:53:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:15.513530   15812 main.go:141] libmachine: STDOUT: 
	I0327 14:13:15.513549   15812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:15.513571   15812 client.go:171] duration metric: took 414.79925ms to LocalClient.Create
	I0327 14:13:17.515794   15812 start.go:128] duration metric: took 2.438600917s to createHost
	I0327 14:13:17.515859   15812 start.go:83] releasing machines lock for "newest-cni-871000", held for 2.438766209s
	W0327 14:13:17.515905   15812 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:17.527798   15812 out.go:177] * Deleting "newest-cni-871000" in qemu2 ...
	W0327 14:13:17.554121   15812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:17.554150   15812 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:22.556487   15812 start.go:360] acquireMachinesLock for newest-cni-871000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:22.557014   15812 start.go:364] duration metric: took 337.709µs to acquireMachinesLock for "newest-cni-871000"
	I0327 14:13:22.557155   15812 start.go:93] Provisioning new machine with config: &{Name:newest-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 14:13:22.557387   15812 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 14:13:22.566928   15812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 14:13:22.615610   15812 start.go:159] libmachine.API.Create for "newest-cni-871000" (driver="qemu2")
	I0327 14:13:22.615656   15812 client.go:168] LocalClient.Create starting
	I0327 14:13:22.615773   15812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/ca.pem
	I0327 14:13:22.615829   15812 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:22.615846   15812 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:22.615913   15812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18158-11341/.minikube/certs/cert.pem
	I0327 14:13:22.615954   15812 main.go:141] libmachine: Decoding PEM data...
	I0327 14:13:22.615965   15812 main.go:141] libmachine: Parsing certificate...
	I0327 14:13:22.616508   15812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 14:13:22.766539   15812 main.go:141] libmachine: Creating SSH key...
	I0327 14:13:22.804532   15812 main.go:141] libmachine: Creating Disk image...
	I0327 14:13:22.804538   15812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 14:13:22.804719   15812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:22.817053   15812 main.go:141] libmachine: STDOUT: 
	I0327 14:13:22.817075   15812 main.go:141] libmachine: STDERR: 
	I0327 14:13:22.817132   15812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2 +20000M
	I0327 14:13:22.828155   15812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 14:13:22.828190   15812 main.go:141] libmachine: STDERR: 
	I0327 14:13:22.828219   15812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:22.828222   15812 main.go:141] libmachine: Starting QEMU VM...
	I0327 14:13:22.828261   15812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:3f:21:61:3c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:22.829954   15812 main.go:141] libmachine: STDOUT: 
	I0327 14:13:22.829975   15812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:22.829989   15812 client.go:171] duration metric: took 214.330792ms to LocalClient.Create
	I0327 14:13:24.832171   15812 start.go:128] duration metric: took 2.274759334s to createHost
	I0327 14:13:24.832241   15812 start.go:83] releasing machines lock for "newest-cni-871000", held for 2.275231834s
	W0327 14:13:24.832656   15812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:24.841315   15812 out.go:177] 
	W0327 14:13:24.849519   15812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:24.849557   15812 out.go:239] * 
	* 
	W0327 14:13:24.852515   15812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:24.866245   15812 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (68.157792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-871000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.99s)
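
Note: every failure in this test traces to one root cause, visible in the stderr above: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client exited with "Connection refused" before it could hand QEMU a network file descriptor, and minikube gave up after one retry. A minimal shell sketch for checking the daemon on the CI host follows; the launchd label and plist path are assumptions based on the upstream lima-vm/socket_vmnet install instructions, not taken from this report:

	# Does the unix socket the client is trying to reach exist at all?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon loaded? (label assumed from upstream docs)
	sudo launchctl list | grep -i socket_vmnet
	# If it is missing, (re)load it; the plist path below is an assumption
	sudo launchctl load /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist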

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-149000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (33.532125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-149000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.769167ms)

** stderr ** 
	error: context "default-k8s-diff-port-149000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (31.018458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-149000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (32.180459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-149000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-149000 --alsologtostderr -v=1: exit status 83 (41.213791ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-149000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-149000"

-- /stdout --
** stderr ** 
	I0327 14:13:18.249839   15834 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:18.249981   15834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:18.249984   15834 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:18.249986   15834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:18.250103   15834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:18.250319   15834 out.go:298] Setting JSON to false
	I0327 14:13:18.250328   15834 mustload.go:65] Loading cluster: default-k8s-diff-port-149000
	I0327 14:13:18.250523   15834 config.go:182] Loaded profile config "default-k8s-diff-port-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 14:13:18.254353   15834 out.go:177] * The control-plane node default-k8s-diff-port-149000 host is not running: state=Stopped
	I0327 14:13:18.258242   15834 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-149000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-149000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (30.797666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (31.026208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.191435s)

-- stdout --
	* [newest-cni-871000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-871000" primary control-plane node in "newest-cni-871000" cluster
	* Restarting existing qemu2 VM for "newest-cni-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 14:13:28.413415   15891 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:28.413547   15891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:28.413551   15891 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:28.413553   15891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:28.413682   15891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:28.414727   15891 out.go:298] Setting JSON to false
	I0327 14:13:28.430868   15891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7978,"bootTime":1711566030,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 14:13:28.430941   15891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 14:13:28.435519   15891 out.go:177] * [newest-cni-871000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 14:13:28.442543   15891 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 14:13:28.442623   15891 notify.go:220] Checking for updates...
	I0327 14:13:28.446495   15891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 14:13:28.449419   15891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 14:13:28.452506   15891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 14:13:28.455548   15891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 14:13:28.458481   15891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 14:13:28.461737   15891 config.go:182] Loaded profile config "newest-cni-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 14:13:28.461971   15891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 14:13:28.466539   15891 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 14:13:28.473453   15891 start.go:297] selected driver: qemu2
	I0327 14:13:28.473459   15891 start.go:901] validating driver "qemu2" against &{Name:newest-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:28.473509   15891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 14:13:28.475665   15891 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 14:13:28.475703   15891 cni.go:84] Creating CNI manager for ""
	I0327 14:13:28.475713   15891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 14:13:28.475734   15891 start.go:340] cluster config:
	{Name:newest-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 14:13:28.480037   15891 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 14:13:28.491525   15891 out.go:177] * Starting "newest-cni-871000" primary control-plane node in "newest-cni-871000" cluster
	I0327 14:13:28.496481   15891 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 14:13:28.496501   15891 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 14:13:28.496514   15891 cache.go:56] Caching tarball of preloaded images
	I0327 14:13:28.496575   15891 preload.go:173] Found /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 14:13:28.496581   15891 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 14:13:28.496659   15891 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/newest-cni-871000/config.json ...
	I0327 14:13:28.497196   15891 start.go:360] acquireMachinesLock for newest-cni-871000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:28.497227   15891 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "newest-cni-871000"
	I0327 14:13:28.497237   15891 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:28.497244   15891 fix.go:54] fixHost starting: 
	I0327 14:13:28.497388   15891 fix.go:112] recreateIfNeeded on newest-cni-871000: state=Stopped err=<nil>
	W0327 14:13:28.497399   15891 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:28.500494   15891 out.go:177] * Restarting existing qemu2 VM for "newest-cni-871000" ...
	I0327 14:13:28.507436   15891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:3f:21:61:3c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:28.509453   15891 main.go:141] libmachine: STDOUT: 
	I0327 14:13:28.509481   15891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:28.509521   15891 fix.go:56] duration metric: took 12.277125ms for fixHost
	I0327 14:13:28.509532   15891 start.go:83] releasing machines lock for "newest-cni-871000", held for 12.300625ms
	W0327 14:13:28.509539   15891 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:28.509573   15891 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:28.509579   15891 start.go:728] Will try again in 5 seconds ...
	I0327 14:13:33.510893   15891 start.go:360] acquireMachinesLock for newest-cni-871000: {Name:mk9113a68659193175ae701a74354810765247fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 14:13:33.511423   15891 start.go:364] duration metric: took 400.209µs to acquireMachinesLock for "newest-cni-871000"
	I0327 14:13:33.511570   15891 start.go:96] Skipping create...Using existing machine configuration
	I0327 14:13:33.511592   15891 fix.go:54] fixHost starting: 
	I0327 14:13:33.512350   15891 fix.go:112] recreateIfNeeded on newest-cni-871000: state=Stopped err=<nil>
	W0327 14:13:33.512379   15891 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 14:13:33.518001   15891 out.go:177] * Restarting existing qemu2 VM for "newest-cni-871000" ...
	I0327 14:13:33.528366   15891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:3f:21:61:3c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18158-11341/.minikube/machines/newest-cni-871000/disk.qcow2
	I0327 14:13:33.538581   15891 main.go:141] libmachine: STDOUT: 
	I0327 14:13:33.538660   15891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 14:13:33.538766   15891 fix.go:56] duration metric: took 27.175167ms for fixHost
	I0327 14:13:33.538795   15891 start.go:83] releasing machines lock for "newest-cni-871000", held for 27.346333ms
	W0327 14:13:33.538992   15891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-871000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-871000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 14:13:33.546947   15891 out.go:177] 
	W0327 14:13:33.551056   15891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 14:13:33.551106   15891 out.go:239] * 
	* 
	W0327 14:13:33.553819   15891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 14:13:33.561014   15891 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-871000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (71.815625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-871000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
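
Note: unlike FirstStart, this run takes the "Skipping create...Using existing machine configuration" path and only restarts the existing VM, yet it dies at the same client connect. The failing step can be reproduced in isolation, without involving QEMU, because socket_vmnet_client simply connects to the socket and then execs the rest of its argument list with the connection passed as fd 3 (hence the -netdev socket,id=net0,fd=3 flag in the log above). A sketch, with `true` standing in for the real QEMU command line:

	# Exits with the same "Connection refused" for as long as the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true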

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-871000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (32.154917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-871000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-871000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-871000 --alsologtostderr -v=1: exit status 83 (43.282208ms)

-- stdout --
	* The control-plane node newest-cni-871000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-871000"

-- /stdout --
** stderr ** 
	I0327 14:13:33.754220   15905 out.go:291] Setting OutFile to fd 1 ...
	I0327 14:13:33.754375   15905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:33.754378   15905 out.go:304] Setting ErrFile to fd 2...
	I0327 14:13:33.754381   15905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 14:13:33.754491   15905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 14:13:33.754727   15905 out.go:298] Setting JSON to false
	I0327 14:13:33.754736   15905 mustload.go:65] Loading cluster: newest-cni-871000
	I0327 14:13:33.754931   15905 config.go:182] Loaded profile config "newest-cni-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 14:13:33.759183   15905 out.go:177] * The control-plane node newest-cni-871000 host is not running: state=Stopped
	I0327 14:13:33.763083   15905 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-871000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-871000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (31.596042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-871000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (32.186875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-871000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 23.59
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-beta.0/json-events 26.07
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.9
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 5.9
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.25
64 TestFunctional/serial/CacheCmd/cache/add_local 1.19
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.25
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.44
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.14
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.25
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.32
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 5.04
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
266 TestNoKubernetes/serial/ProfileList 31.32
267 TestNoKubernetes/serial/Stop 3.32
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
286 TestStartStop/group/old-k8s-version/serial/Stop 3.12
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
291 TestStartStop/group/no-preload/serial/Stop 3.36
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/embed-certs/serial/Stop 3.09
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.19
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.25
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-978000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-978000: exit status 85 (97.243709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |          |
	|         | -p download-only-978000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 13:45:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 13:45:24.593176   11754 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:45:24.593346   11754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:45:24.593349   11754 out.go:304] Setting ErrFile to fd 2...
	I0327 13:45:24.593351   11754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:45:24.593463   11754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	W0327 13:45:24.593549   11754 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18158-11341/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18158-11341/.minikube/config/config.json: no such file or directory
	I0327 13:45:24.594890   11754 out.go:298] Setting JSON to true
	I0327 13:45:24.612653   11754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6294,"bootTime":1711566030,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:45:24.612721   11754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:45:24.617897   11754 out.go:97] [download-only-978000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:45:24.621745   11754 out.go:169] MINIKUBE_LOCATION=18158
	I0327 13:45:24.618056   11754 notify.go:220] Checking for updates...
	W0327 13:45:24.618098   11754 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 13:45:24.627091   11754 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:45:24.629787   11754 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:45:24.632824   11754 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:45:24.635800   11754 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	W0327 13:45:24.641778   11754 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 13:45:24.641965   11754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:45:24.644756   11754 out.go:97] Using the qemu2 driver based on user configuration
	I0327 13:45:24.644773   11754 start.go:297] selected driver: qemu2
	I0327 13:45:24.644787   11754 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:45:24.644841   11754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:45:24.647720   11754 out.go:169] Automatically selected the socket_vmnet network
	I0327 13:45:24.653062   11754 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 13:45:24.653173   11754 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:45:24.653264   11754 cni.go:84] Creating CNI manager for ""
	I0327 13:45:24.653282   11754 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 13:45:24.653330   11754 start.go:340] cluster config:
	{Name:download-only-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:45:24.658218   11754 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:45:24.662639   11754 out.go:97] Downloading VM boot image ...
	I0327 13:45:24.662668   11754 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso
	I0327 13:45:41.876625   11754 out.go:97] Starting "download-only-978000" primary control-plane node in "download-only-978000" cluster
	I0327 13:45:41.876666   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:45:42.146232   11754 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 13:45:42.146307   11754 cache.go:56] Caching tarball of preloaded images
	I0327 13:45:42.147684   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:45:42.151068   11754 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 13:45:42.151094   11754 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:45:42.720876   11754 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 13:46:02.565910   11754 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:02.566090   11754 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:03.263993   11754 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 13:46:03.264200   11754 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-978000/config.json ...
	I0327 13:46:03.264220   11754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-978000/config.json: {Name:mk7caafd3c9c2f6d5198e090232e1e442ddbf929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:46:03.264459   11754 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 13:46:03.265336   11754 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 13:46:03.630556   11754 out.go:169] 
	W0327 13:46:03.637681   11754 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220 0x1087f3220] Decompressors:map[bz2:0x1400051d730 gz:0x1400051d738 tar:0x1400051d6c0 tar.bz2:0x1400051d6d0 tar.gz:0x1400051d6f0 tar.xz:0x1400051d700 tar.zst:0x1400051d710 tbz2:0x1400051d6d0 tgz:0x1400051d6f0 txz:0x1400051d700 tzst:0x1400051d710 xz:0x1400051d740 zip:0x1400051d750 zst:0x1400051d748] Getters:map[file:0x14002506640 http:0x1400090c2d0 https:0x1400090c3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 13:46:03.637717   11754 out_reason.go:110] 
	W0327 13:46:03.644504   11754 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 13:46:03.648543   11754 out.go:169] 
	
	
	* The control-plane node download-only-978000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-978000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-978000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.3/json-events (23.59s)
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-255000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-255000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (23.591814625s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (23.59s)

TestDownloadOnly/v1.29.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-255000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-255000: exit status 85 (81.085667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |                     |
	|         | -p download-only-978000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-978000        | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | -o=json --download-only        | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | -p download-only-255000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 13:46:04
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 13:46:04.323611   11805 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:46:04.323746   11805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:04.323750   11805 out.go:304] Setting ErrFile to fd 2...
	I0327 13:46:04.323752   11805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:04.323871   11805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:46:04.324967   11805 out.go:298] Setting JSON to true
	I0327 13:46:04.340881   11805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6334,"bootTime":1711566030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:46:04.340944   11805 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:46:04.346146   11805 out.go:97] [download-only-255000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:46:04.350135   11805 out.go:169] MINIKUBE_LOCATION=18158
	I0327 13:46:04.346249   11805 notify.go:220] Checking for updates...
	I0327 13:46:04.357137   11805 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:46:04.360142   11805 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:46:04.363166   11805 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:46:04.366129   11805 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	W0327 13:46:04.373106   11805 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 13:46:04.373248   11805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:46:04.376072   11805 out.go:97] Using the qemu2 driver based on user configuration
	I0327 13:46:04.376081   11805 start.go:297] selected driver: qemu2
	I0327 13:46:04.376086   11805 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:46:04.376141   11805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:46:04.379131   11805 out.go:169] Automatically selected the socket_vmnet network
	I0327 13:46:04.384364   11805 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 13:46:04.384457   11805 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:46:04.384497   11805 cni.go:84] Creating CNI manager for ""
	I0327 13:46:04.384506   11805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:46:04.384512   11805 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:46:04.384553   11805 start.go:340] cluster config:
	{Name:download-only-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:46:04.388922   11805 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:46:04.392165   11805 out.go:97] Starting "download-only-255000" primary control-plane node in "download-only-255000" cluster
	I0327 13:46:04.392174   11805 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:46:05.032846   11805 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:46:05.032940   11805 cache.go:56] Caching tarball of preloaded images
	I0327 13:46:05.034927   11805 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:46:05.038858   11805 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 13:46:05.038895   11805 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:05.617546   11805 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 13:46:20.773255   11805 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:20.773434   11805 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:21.331247   11805 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 13:46:21.331459   11805 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-255000/config.json ...
	I0327 13:46:21.331474   11805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-255000/config.json: {Name:mkc9fe9ac11b9f990c6ab8d90ad641045353d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:46:21.331720   11805 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 13:46:21.332534   11805 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-255000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-255000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-255000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-beta.0/json-events (26.07s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-231000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-231000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 : (26.073343166s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (26.07s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-231000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-231000: exit status 85 (77.736958ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:45 PDT |                     |
	|         | -p download-only-978000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-978000             | download-only-978000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | -o=json --download-only             | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | -p download-only-255000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| delete  | -p download-only-255000             | download-only-255000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT | 27 Mar 24 13:46 PDT |
	| start   | -o=json --download-only             | download-only-231000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 13:46 PDT |                     |
	|         | -p download-only-231000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 13:46:28
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 13:46:28.457661   11843 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:46:28.457794   11843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:28.457797   11843 out.go:304] Setting ErrFile to fd 2...
	I0327 13:46:28.457799   11843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:46:28.457924   11843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:46:28.459054   11843 out.go:298] Setting JSON to true
	I0327 13:46:28.475240   11843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6358,"bootTime":1711566030,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:46:28.475302   11843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:46:28.480212   11843 out.go:97] [download-only-231000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:46:28.484028   11843 out.go:169] MINIKUBE_LOCATION=18158
	I0327 13:46:28.480327   11843 notify.go:220] Checking for updates...
	I0327 13:46:28.492089   11843 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:46:28.495132   11843 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:46:28.498136   11843 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:46:28.501169   11843 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	W0327 13:46:28.507112   11843 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 13:46:28.507279   11843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:46:28.510106   11843 out.go:97] Using the qemu2 driver based on user configuration
	I0327 13:46:28.510113   11843 start.go:297] selected driver: qemu2
	I0327 13:46:28.510116   11843 start.go:901] validating driver "qemu2" against <nil>
	I0327 13:46:28.510173   11843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 13:46:28.513066   11843 out.go:169] Automatically selected the socket_vmnet network
	I0327 13:46:28.518205   11843 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 13:46:28.518297   11843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 13:46:28.518332   11843 cni.go:84] Creating CNI manager for ""
	I0327 13:46:28.518342   11843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 13:46:28.518352   11843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 13:46:28.518397   11843 start.go:340] cluster config:
	{Name:download-only-231000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:46:28.522618   11843 iso.go:125] acquiring lock: {Name:mk61905f213eb14d03fe7910fafe1b5f69888ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 13:46:28.526134   11843 out.go:97] Starting "download-only-231000" primary control-plane node in "download-only-231000" cluster
	I0327 13:46:28.526144   11843 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 13:46:29.165928   11843 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 13:46:29.165993   11843 cache.go:56] Caching tarball of preloaded images
	I0327 13:46:29.166392   11843 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 13:46:29.170760   11843 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 13:46:29.170786   11843 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:29.743862   11843 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:e2591d3d8d44bfdea8fdcdf9682f34bf -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 13:46:45.822149   11843 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:45.822313   11843 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 13:46:46.366093   11843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 13:46:46.366290   11843 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-231000/config.json ...
	I0327 13:46:46.366306   11843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18158-11341/.minikube/profiles/download-only-231000/config.json: {Name:mkef212b783ab3cf05e963c822640e0e00d928d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 13:46:46.366537   11843 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 13:46:46.366652   11843 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18158-11341/.minikube/cache/darwin/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-231000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-231000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-231000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-626000 --alsologtostderr --binary-mirror http://127.0.0.1:52074 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-626000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-626000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-714000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-714000: exit status 85 (58.437958ms)

-- stdout --
	* Profile "addons-714000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-714000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-714000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-714000: exit status 85 (62.344ms)

-- stdout --
	* Profile "addons-714000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-714000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.9s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.90s)

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status: exit status 7 (33.030083ms)

-- stdout --
	nospam-959000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status: exit status 7 (31.991125ms)

-- stdout --
	nospam-959000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status: exit status 7 (32.016833ms)

-- stdout --
	nospam-959000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause: exit status 83 (42.068666ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause: exit status 83 (41.864042ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause: exit status 83 (42.891333ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause: exit status 83 (41.695834ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause: exit status 83 (41.847167ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause: exit status 83 (41.732542ms)

-- stdout --
	* The control-plane node nospam-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-959000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (5.9s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop: (1.828829333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop: (1.99018825s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-959000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-959000 stop: (2.078528666s)
--- PASS: TestErrorSpam/stop (5.90s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18158-11341/.minikube/files/etc/test/nested/copy/11752/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.1: (2.181226209s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.3: (2.2304725s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:latest: (1.840359583s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.25s)
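
For reference, the cache workflow exercised by these CacheCmd subtests can be replayed by hand. A minimal sketch, assuming the functional-334000 profile from this run (each command appears verbatim in this or a sibling subtest):

	$ minikube -p functional-334000 cache add registry.k8s.io/pause:3.1   # pull the image and store it in minikube's cache
	$ minikube cache list                                                 # confirm the cached image is listed
	$ minikube cache delete registry.k8s.io/pause:3.1                     # drop it from the cache again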

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2592284174/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add minikube-local-cache-test:functional-334000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache delete minikube-local-cache-test:functional-334000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 config get cpus: exit status 14 (32.181ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 config get cpus: exit status 14 (39.099459ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
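
The ConfigCmd pass above hinges on `config get` returning exit status 14 for an unset key, which is what both Non-zero exit entries show. A minimal sketch of the same set/get/unset cycle against an existing profile:

	$ minikube -p functional-334000 config unset cpus
	$ minikube -p functional-334000 config get cpus    # exit status 14: key not in config
	$ minikube -p functional-334000 config set cpus 2
	$ minikube -p functional-334000 config get cpus    # prints 2 and exits 0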

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (172.582041ms)

-- stdout --
	* [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0327 13:48:46.884320   12500 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:48:46.884497   12500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:46.884501   12500 out.go:304] Setting ErrFile to fd 2...
	I0327 13:48:46.884505   12500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:46.884689   12500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:48:46.886266   12500 out.go:298] Setting JSON to false
	I0327 13:48:46.905605   12500 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6496,"bootTime":1711566030,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:48:46.905663   12500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:48:46.911228   12500 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 13:48:46.919223   12500 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:48:46.919263   12500 notify.go:220] Checking for updates...
	I0327 13:48:46.926211   12500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:48:46.929175   12500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:48:46.932195   12500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:48:46.935111   12500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:48:46.938146   12500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:48:46.941536   12500 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:48:46.941839   12500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:48:46.953763   12500 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 13:48:46.961277   12500 start.go:297] selected driver: qemu2
	I0327 13:48:46.961288   12500 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:48:46.961359   12500 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:48:46.968169   12500 out.go:177] 
	W0327 13:48:46.972178   12500 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 13:48:46.975104   12500 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
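
DryRun relies on `--dry-run` performing driver and resource validation without creating a VM: the 250MB request fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because it is below the 1800MB usable minimum, while the second invocation with default memory passes. A minimal reproduction of the failing case:

	$ minikube start -p functional-334000 --dry-run --memory 250MB --driver=qemu2   # expected: exit status 23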

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.197292ms)

-- stdout --
	* [functional-334000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0327 13:48:47.122486   12511 out.go:291] Setting OutFile to fd 1 ...
	I0327 13:48:47.122605   12511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.122608   12511 out.go:304] Setting ErrFile to fd 2...
	I0327 13:48:47.122611   12511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 13:48:47.122742   12511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18158-11341/.minikube/bin
	I0327 13:48:47.124168   12511 out.go:298] Setting JSON to false
	I0327 13:48:47.141063   12511 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6497,"bootTime":1711566030,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 13:48:47.141140   12511 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 13:48:47.146198   12511 out.go:177] * [functional-334000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0327 13:48:47.153213   12511 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 13:48:47.157275   12511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	I0327 13:48:47.153255   12511 notify.go:220] Checking for updates...
	I0327 13:48:47.163140   12511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 13:48:47.166233   12511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 13:48:47.167776   12511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	I0327 13:48:47.171233   12511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 13:48:47.174461   12511 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 13:48:47.174735   12511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 13:48:47.178999   12511 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0327 13:48:47.186223   12511 start.go:297] selected driver: qemu2
	I0327 13:48:47.186229   12511 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 13:48:47.186299   12511 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 13:48:47.193141   12511 out.go:177] 
	W0327 13:48:47.197222   12511 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 13:48:47.201235   12511 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
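
InternationalLanguage repeats the DryRun failure and checks that the message is localized (French here, "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY", i.e. the same insufficient-memory exit). The harness presumably switches the locale via environment variables; the LC_ALL form below is my assumption, not something this log shows:

	$ LC_ALL=fr_FR.UTF-8 minikube start -p functional-334000 --dry-run --memory 250MB --driver=qemu2
	# expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."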

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.246493625s)
--- PASS: TestFunctional/parallel/License (1.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.402898666s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.44s)
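
Setup only stages a tagged test image in the local Docker daemon; the later ImageCommands subtests then move it in and out of the cluster. A condensed sketch built from commands that appear verbatim in this run:

	$ docker pull gcr.io/google-containers/addon-resizer:1.8.8
	$ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-334000
	$ minikube -p functional-334000 image ls    # list the images visible inside the cluster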

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image rm gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-334000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image save --daemon gcr.io/google-containers/addon-resizer:functional-334000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "73.258708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.938458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "74.133167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.820833ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
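
The ProfileCmd subtests cover the output modes of `profile list`; judging by the timings above, the --light variant runs in roughly half the time, presumably because it skips the slower per-profile status checks. A quick sketch:

	$ minikube profile list                       # human-readable table
	$ minikube profile list -o json               # machine-readable
	$ minikube profile list -o json --light       # faster, lighter-weight listing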

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013649625s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
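
This subtest confirms the tunnel's DNS entry resolves through the macOS resolver itself, not only via dig. The check can be run directly while `minikube tunnel` is up; the trailing dot forces a fully-qualified lookup:

	$ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.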

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-334000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.25s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-160000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-160000 --output=json --user=testUser: (3.252198875s)
--- PASS: TestJSONOutput/stop/Command (3.25s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-269000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-269000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.877333ms)

-- stdout --
	{"specversion":"1.0","id":"11aace36-62e7-40f9-b7da-f12006bc9850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-269000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"691f95ad-ea20-4cff-bd63-e752077e698f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18158"}}
	{"specversion":"1.0","id":"1155d937-adf8-4e6f-a8ad-7ecf719c6305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig"}}
	{"specversion":"1.0","id":"1218c166-2fa4-4935-af4e-e89e3905fb7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"11c9c80c-2138-4745-9b51-9332218b6ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e12ec207-e945-4692-8fb9-0193993a810e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube"}}
	{"specversion":"1.0","id":"029d3be3-7778-4860-bd49-a1a0356c62ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"742a2522-256f-4aa5-a896-35e12dbc81c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-269000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-269000
--- PASS: TestErrorJSONOutput (0.32s)
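
TestErrorJSONOutput confirms that even an early validation failure (an unsupported driver) is emitted as CloudEvents-style JSON lines on stdout, ending in an io.k8s.sigs.minikube.error event with exitcode 56. Such output is straightforward to consume programmatically; a sketch assuming jq is available (jq is my addition, not part of this suite):

	$ minikube start -p json-output-error-269000 --output=json --driver=fail 2>/dev/null | jq -r .type
	# io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, ..., io.k8s.sigs.minikube.error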

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (5.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-529000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.846625ms)

-- stdout --
	* [NoKubernetes-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18158-11341/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18158-11341/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
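
The usage error above is intentional: `--no-kubernetes` is mutually exclusive with `--kubernetes-version`, including a version pinned in the global config. Following the error message's own suggestion:

	$ minikube config unset kubernetes-version    # clear any globally pinned version
	$ minikube start -p NoKubernetes-529000 --no-kubernetes --driver=qemu2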

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-529000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-529000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.506042ms)

-- stdout --
	* The control-plane node NoKubernetes-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-529000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
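
The verification is a one-liner over `minikube ssh`: `systemctl is-active --quiet` exits 0 only when the unit is running, so any non-zero exit satisfies the test. Here the host itself is stopped, so minikube exits 83 before the command even reaches the node:

	$ minikube ssh -p NoKubernetes-529000 "sudo systemctl is-active --quiet service kubelet"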

TestNoKubernetes/serial/ProfileList (31.32s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.629255917s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.685495709s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.32s)

TestNoKubernetes/serial/Stop (3.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-529000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-529000: (3.315455833s)
--- PASS: TestNoKubernetes/serial/Stop (3.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-529000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-529000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.780125ms)

-- stdout --
	* The control-plane node NoKubernetes-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-529000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-077000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-462000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-462000 --alsologtostderr -v=3: (3.123863875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (58.819041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-462000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
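
EnableAddonAfterStop first queries the host state through a Go template (exit status 7 signals a stopped host, which the harness notes "may be ok"), then enables the dashboard addon against the stopped profile. The same two steps by hand:

	$ minikube status --format={{.Host}} -p old-k8s-version-462000    # prints Stopped, exits 7
	$ minikube addons enable dashboard -p old-k8s-version-462000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4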

TestStartStop/group/no-preload/serial/Stop (3.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-063000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-063000 --alsologtostderr -v=3: (3.362928834s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-063000 -n no-preload-063000: exit status 7 (57.139291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-063000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-995000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-995000 --alsologtostderr -v=3: (3.090597958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (58.363375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-995000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-149000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-149000 --alsologtostderr -v=3: (3.184894916s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-149000 -n default-k8s-diff-port-149000: exit status 7 (57.122667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-149000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-871000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-871000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-871000 --alsologtostderr -v=3: (3.25152425s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-871000 -n newest-cni-871000: exit status 7 (59.777625ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-871000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3108170902/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711572489339642000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3108170902/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711572489339642000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3108170902/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711572489339642000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3108170902/001/test-1711572489339642000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.400166ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.532625ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.644625ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.028792ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.528292ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.5655ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.321875ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p": exit status 83 (49.723583ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3108170902/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.02s)

TestFunctional/parallel/MountCmd/specific-port (11.5s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2026924179/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.183708ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.0895ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.85175ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.220625ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.396291ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.091041ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.306ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p": exit status 83 (47.679542ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2026924179/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (82.33625ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (88.017834ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (87.317875ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (86.6375ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (86.998041ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (89.420917ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 83 (87.738542ms)
-- stdout --
	* The control-plane node functional-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-334000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1739764631/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.95s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-487000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-487000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/hosts:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/resolv.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-487000

>>> host: crictl pods:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crictl containers:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: describe netcat deployment:
error: context "cilium-487000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-487000" does not exist

>>> k8s: netcat logs:
error: context "cilium-487000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-487000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-487000" does not exist

>>> k8s: coredns logs:
error: context "cilium-487000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-487000" does not exist

>>> k8s: api server logs:
error: context "cilium-487000" does not exist

>>> host: /etc/cni:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: ip a s:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: ip r s:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: iptables-save:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: iptables table nat:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-487000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-487000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-487000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-487000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-487000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-487000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-487000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: kubelet daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: kubelet logs:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-487000

>>> host: docker daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: docker daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: docker system info:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-docker daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-docker daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-dockerd version:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd config dump:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/crio:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"
----------------------- debugLogs end: cilium-487000 [took: 2.297304167s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-487000
--- SKIP: TestNetworkPlugins/group/cilium (2.53s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-595000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)